00:00:00.000 Started by upstream project "autotest-nightly-lts" build number 2001
00:00:00.000 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3262
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.001 Started by timer
00:00:00.167 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/ubuntu20-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.168 The recommended git tool is: git
00:00:00.168 using credential 00000000-0000-0000-0000-000000000002
00:00:00.169 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/ubuntu20-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.196 Fetching changes from the remote Git repository
00:00:00.198 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.230 Using shallow fetch with depth 1
00:00:00.230 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.231 > git --version # timeout=10
00:00:00.250 > git --version # 'git version 2.39.2'
00:00:00.250 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.263 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.263 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.681 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.692 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.703 Checking out Revision 308e970df89ed396a3f9dcf22fba8891259694e4 (FETCH_HEAD)
00:00:07.703 > git config core.sparsecheckout # timeout=10
00:00:07.713 > git read-tree -mu HEAD # timeout=10
00:00:07.731 > git checkout -f 308e970df89ed396a3f9dcf22fba8891259694e4 # timeout=5
00:00:07.748 Commit message: "jjb/create-perf-report: make job run concurrent"
00:00:07.748 > git rev-list --no-walk 308e970df89ed396a3f9dcf22fba8891259694e4 # timeout=10
00:00:07.832 [Pipeline] Start of Pipeline
00:00:07.844 [Pipeline] library
00:00:07.846 Loading library shm_lib@master
00:00:07.846 Library shm_lib@master is cached. Copying from home.
00:00:07.856 [Pipeline] node
00:00:07.864 Running on VM-host-SM16 in /var/jenkins/workspace/ubuntu20-vg-autotest_2
00:00:07.865 [Pipeline] {
00:00:07.873 [Pipeline] catchError
00:00:07.874 [Pipeline] {
00:00:07.882 [Pipeline] wrap
00:00:07.888 [Pipeline] {
00:00:07.893 [Pipeline] stage
00:00:07.895 [Pipeline] { (Prologue)
00:00:07.907 [Pipeline] echo
00:00:07.908 Node: VM-host-SM16
00:00:07.912 [Pipeline] cleanWs
00:00:07.919 [WS-CLEANUP] Deleting project workspace...
00:00:07.919 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.923 [WS-CLEANUP] done
00:00:08.067 [Pipeline] setCustomBuildProperty
00:00:08.161 [Pipeline] httpRequest
00:00:08.193 [Pipeline] echo
00:00:08.194 Sorcerer 10.211.164.101 is alive
00:00:08.200 [Pipeline] httpRequest
00:00:08.204 HttpMethod: GET
00:00:08.205 URL: http://10.211.164.101/packages/jbp_308e970df89ed396a3f9dcf22fba8891259694e4.tar.gz
00:00:08.205 Sending request to url: http://10.211.164.101/packages/jbp_308e970df89ed396a3f9dcf22fba8891259694e4.tar.gz
00:00:08.225 Response Code: HTTP/1.1 200 OK
00:00:08.226 Success: Status code 200 is in the accepted range: 200,404
00:00:08.226 Saving response body to /var/jenkins/workspace/ubuntu20-vg-autotest_2/jbp_308e970df89ed396a3f9dcf22fba8891259694e4.tar.gz
00:00:16.026 [Pipeline] sh
00:00:16.304 + tar --no-same-owner -xf jbp_308e970df89ed396a3f9dcf22fba8891259694e4.tar.gz
00:00:16.321 [Pipeline] httpRequest
00:00:16.355 [Pipeline] echo
00:00:16.356 Sorcerer 10.211.164.101 is alive
00:00:16.366 [Pipeline] httpRequest
00:00:16.370 HttpMethod: GET
00:00:16.371 URL: http://10.211.164.101/packages/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz
00:00:16.371 Sending request to url: http://10.211.164.101/packages/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz
00:00:16.393 Response Code: HTTP/1.1 200 OK
00:00:16.394 Success: Status code 200 is in the accepted range: 200,404
00:00:16.394 Saving response body to /var/jenkins/workspace/ubuntu20-vg-autotest_2/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz
00:01:08.817 [Pipeline] sh
00:01:09.093 + tar --no-same-owner -xf spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz
00:01:12.385 [Pipeline] sh
00:01:12.662 + git -C spdk log --oneline -n5
00:01:12.662 4b94202c6 lib/event: Bug fix for framework_set_scheduler
00:01:12.662 507e9ba07 nvme: add lock_depth for ctrlr_lock
00:01:12.662 62fda7b5f nvme: check pthread_mutex_destroy() return value
00:01:12.662 e03c164a1 nvme: add nvme_ctrlr_lock
00:01:12.662 d61f89a86 nvme/cuse: Add ctrlr_lock for cuse register and unregister
00:01:12.676 [Pipeline] writeFile
00:01:12.690 [Pipeline] sh
00:01:12.963 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:12.976 [Pipeline] sh
00:01:13.255 + cat autorun-spdk.conf
00:01:13.255 SPDK_TEST_UNITTEST=1
00:01:13.255 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:13.255 SPDK_TEST_NVME=1
00:01:13.255 SPDK_TEST_BLOCKDEV=1
00:01:13.255 SPDK_RUN_ASAN=1
00:01:13.255 SPDK_RUN_UBSAN=1
00:01:13.255 SPDK_TEST_RAID5=1
00:01:13.255 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:13.262 RUN_NIGHTLY=1
00:01:13.263 [Pipeline] }
00:01:13.281 [Pipeline] // stage
00:01:13.298 [Pipeline] stage
00:01:13.300 [Pipeline] { (Run VM)
00:01:13.315 [Pipeline] sh
00:01:13.592 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:13.592 + echo 'Start stage prepare_nvme.sh'
00:01:13.592 Start stage prepare_nvme.sh
00:01:13.592 + [[ -n 7 ]]
00:01:13.592 + disk_prefix=ex7
00:01:13.592 + [[ -n /var/jenkins/workspace/ubuntu20-vg-autotest_2 ]]
00:01:13.592 + [[ -e /var/jenkins/workspace/ubuntu20-vg-autotest_2/autorun-spdk.conf ]]
00:01:13.592 + source /var/jenkins/workspace/ubuntu20-vg-autotest_2/autorun-spdk.conf
00:01:13.592 ++ SPDK_TEST_UNITTEST=1
00:01:13.592 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:13.592 ++ SPDK_TEST_NVME=1
00:01:13.592 ++ SPDK_TEST_BLOCKDEV=1
00:01:13.592 ++ SPDK_RUN_ASAN=1
00:01:13.592 ++ SPDK_RUN_UBSAN=1
00:01:13.592 ++ SPDK_TEST_RAID5=1
00:01:13.592 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:13.592 ++ RUN_NIGHTLY=1
00:01:13.592 + cd /var/jenkins/workspace/ubuntu20-vg-autotest_2
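[Editor's note: the autorun-spdk.conf dumped above is a flat VAR=value file that the scripts load with a plain bash `source`, as the ++ trace lines show. A minimal sketch of that pattern, assuming the same file name; require_flag is a hypothetical helper added here only to illustrate validating the 0/1 switches:

    #!/bin/bash
    # Load the per-job test configuration (plain VAR=value lines).
    conf=autorun-spdk.conf
    [[ -e $conf ]] || { echo "missing $conf" >&2; exit 1; }
    source "$conf"
    # require_flag (hypothetical): fail fast if a switch is unset or not 0/1.
    require_flag() { [[ ${!1-} =~ ^[01]$ ]] || { echo "bad or missing $1" >&2; exit 1; }; }
    require_flag SPDK_TEST_UNITTEST
    require_flag SPDK_RUN_ASAN
    require_flag SPDK_RUN_UBSAN
    echo "nightly: ${RUN_NIGHTLY:-0}"
]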
00:01:13.592 + nvme_files=()
00:01:13.592 + declare -A nvme_files
00:01:13.592 + backend_dir=/var/lib/libvirt/images/backends
00:01:13.592 + nvme_files['nvme.img']=5G
00:01:13.592 + nvme_files['nvme-cmb.img']=5G
00:01:13.592 + nvme_files['nvme-multi0.img']=4G
00:01:13.592 + nvme_files['nvme-multi1.img']=4G
00:01:13.592 + nvme_files['nvme-multi2.img']=4G
00:01:13.592 + nvme_files['nvme-openstack.img']=8G
00:01:13.592 + nvme_files['nvme-zns.img']=5G
00:01:13.592 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:13.592 + (( SPDK_TEST_FTL == 1 ))
00:01:13.592 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:13.592 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:13.592 + for nvme in "${!nvme_files[@]}"
00:01:13.592 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G
00:01:13.592 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:13.592 + for nvme in "${!nvme_files[@]}"
00:01:13.592 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G
00:01:13.592 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:13.592 + for nvme in "${!nvme_files[@]}"
00:01:13.592 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G
00:01:13.592 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:13.592 + for nvme in "${!nvme_files[@]}"
00:01:13.592 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G
00:01:13.592 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:13.592 + for nvme in "${!nvme_files[@]}"
00:01:13.592 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G
00:01:13.592 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:13.592 + for nvme in "${!nvme_files[@]}"
00:01:13.592 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G
00:01:13.592 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:13.592 + for nvme in "${!nvme_files[@]}"
00:01:13.592 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G
00:01:13.592 Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:13.592 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu
00:01:13.592 + echo 'End stage prepare_nvme.sh'
00:01:13.592 End stage prepare_nvme.sh
00:01:13.602 [Pipeline] sh
00:01:13.892 + DISTRO=ubuntu2004 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:13.892 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex7-nvme.img -H -a -v -f ubuntu2004
00:01:13.892
00:01:13.892 DIR=/var/jenkins/workspace/ubuntu20-vg-autotest_2/spdk/scripts/vagrant
00:01:13.892 SPDK_DIR=/var/jenkins/workspace/ubuntu20-vg-autotest_2/spdk
00:01:13.892 VAGRANT_TARGET=/var/jenkins/workspace/ubuntu20-vg-autotest_2
00:01:13.892 HELP=0
00:01:13.892 DRY_RUN=0
00:01:13.892 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme.img,
00:01:13.892 NVME_DISKS_TYPE=nvme,
00:01:13.892 NVME_AUTO_CREATE=0
00:01:13.892 NVME_DISKS_NAMESPACES=,
00:01:13.892 NVME_CMB=,
00:01:13.892 NVME_PMR=,
00:01:13.892 NVME_ZNS=,
00:01:13.892 NVME_MS=,
00:01:13.892 NVME_FDP=,
00:01:13.892 SPDK_VAGRANT_DISTRO=ubuntu2004
00:01:13.892 SPDK_VAGRANT_VMCPU=10
00:01:13.892 SPDK_VAGRANT_VMRAM=12288
00:01:13.892 SPDK_VAGRANT_PROVIDER=libvirt
00:01:13.892 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:01:13.892 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:13.892 SPDK_OPENSTACK_NETWORK=0
00:01:13.892 VAGRANT_PACKAGE_BOX=0
00:01:13.892 VAGRANTFILE=/var/jenkins/workspace/ubuntu20-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile
00:01:13.892 FORCE_DISTRO=true
00:01:13.892 VAGRANT_BOX_VERSION=
00:01:13.892 EXTRA_VAGRANTFILES=
00:01:13.892 NIC_MODEL=e1000
00:01:13.892
00:01:13.892 mkdir: created directory '/var/jenkins/workspace/ubuntu20-vg-autotest_2/ubuntu2004-libvirt'
00:01:13.892 /var/jenkins/workspace/ubuntu20-vg-autotest_2/ubuntu2004-libvirt /var/jenkins/workspace/ubuntu20-vg-autotest_2
00:01:17.193 Bringing machine 'default' up with 'libvirt' provider...
00:01:17.451 ==> default: Creating image (snapshot of base box volume).
00:01:17.709 ==> default: Creating domain with the following settings...
00:01:17.709 ==> default:  -- Name: ubuntu2004-20.04-1712646987-2220_default_1720779311_b88cfa7ba4ba1cebec0b
00:01:17.709 ==> default:  -- Domain type: kvm
00:01:17.709 ==> default:  -- Cpus: 10
00:01:17.709 ==> default:  -- Feature: acpi
00:01:17.709 ==> default:  -- Feature: apic
00:01:17.709 ==> default:  -- Feature: pae
00:01:17.709 ==> default:  -- Memory: 12288M
00:01:17.709 ==> default:  -- Memory Backing: hugepages:
00:01:17.709 ==> default:  -- Management MAC:
00:01:17.709 ==> default:  -- Loader:
00:01:17.709 ==> default:  -- Nvram:
00:01:17.709 ==> default:  -- Base box: spdk/ubuntu2004
00:01:17.709 ==> default:  -- Storage pool: default
00:01:17.709 ==> default:  -- Image: /var/lib/libvirt/images/ubuntu2004-20.04-1712646987-2220_default_1720779311_b88cfa7ba4ba1cebec0b.img (20G)
00:01:17.709 ==> default:  -- Volume Cache: default
00:01:17.709 ==> default:  -- Kernel:
00:01:17.709 ==> default:  -- Initrd:
00:01:17.709 ==> default:  -- Graphics Type: vnc
00:01:17.709 ==> default:  -- Graphics Port: -1
00:01:17.709 ==> default:  -- Graphics IP: 127.0.0.1
00:01:17.709 ==> default:  -- Graphics Password: Not defined
00:01:17.709 ==> default:  -- Video Type: cirrus
00:01:17.709 ==> default:  -- Video VRAM: 9216
00:01:17.709 ==> default:  -- Sound Type:
00:01:17.709 ==> default:  -- Keymap: en-us
00:01:17.709 ==> default:  -- TPM Path:
00:01:17.709 ==> default:  -- INPUT: type=mouse, bus=ps2
00:01:17.709 ==> default:  -- Command line args:
00:01:17.709 ==> default:    -> value=-device,
00:01:17.709 ==> default:    -> value=nvme,id=nvme-0,serial=12340,
00:01:17.709 ==> default:    -> value=-drive,
00:01:17.709 ==> default:    -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-0-drive0,
00:01:17.709 ==> default:    -> value=-device,
00:01:17.709 ==> default:    -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:17.968 ==> default: Creating shared folders metadata...
00:01:17.968 ==> default: Starting domain.
00:01:19.868 ==> default: Waiting for domain to get an IP address...
00:01:29.901 ==> default: Waiting for SSH to become available...
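[Editor's note: the "-> value=" pairs in the domain settings above are raw arguments that vagrant-libvirt passes through to QEMU. For reference, a hand-run equivalent of just the NVMe part might look like the sketch below; only the -drive/-device arguments and the emulator path are taken from the log, while the machine type, accelerator, and the omitted boot disk and NIC are assumptions:

    # Attach the raw backing file as an emulated NVMe controller + namespace.
    /usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 \
        -machine q35,accel=kvm -smp 10 -m 12288 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-0-drive0 \
        -device nvme,id=nvme-0,serial=12340 \
        -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096
]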
00:01:30.834 ==> default: Configuring and enabling network interfaces...
00:01:33.365 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/ubuntu20-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:38.635 ==> default: Mounting SSHFS shared folder...
00:01:38.635 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/ubuntu20-vg-autotest_2/ubuntu2004-libvirt/output => /home/vagrant/spdk_repo/output
00:01:38.635 ==> default: Checking Mount..
00:01:41.186 ==> default: Checking Mount..
00:01:41.186 ==> default: Folder Successfully Mounted!
00:01:41.186 ==> default: Running provisioner: file...
00:01:41.444     default: ~/.gitconfig => .gitconfig
00:01:41.444
00:01:41.444 SUCCESS!
00:01:41.444
00:01:41.444 cd to /var/jenkins/workspace/ubuntu20-vg-autotest_2/ubuntu2004-libvirt and type "vagrant ssh" to use.
00:01:41.444 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:41.444 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/ubuntu20-vg-autotest_2/ubuntu2004-libvirt" to destroy all trace of vm.
00:01:41.444
00:01:41.453 [Pipeline] }
00:01:41.473 [Pipeline] // stage
00:01:41.482 [Pipeline] dir
00:01:41.483 Running in /var/jenkins/workspace/ubuntu20-vg-autotest_2/ubuntu2004-libvirt
00:01:41.485 [Pipeline] {
00:01:41.500 [Pipeline] catchError
00:01:41.501 [Pipeline] {
00:01:41.515 [Pipeline] sh
00:01:41.788 + vagrant ssh-config --host vagrant
00:01:41.788 + sed -ne /^Host/,$p
00:01:41.788 + tee ssh_conf
00:01:45.068 Host vagrant
00:01:45.068   HostName 192.168.121.249
00:01:45.068   User vagrant
00:01:45.068   Port 22
00:01:45.068   UserKnownHostsFile /dev/null
00:01:45.068   StrictHostKeyChecking no
00:01:45.068   PasswordAuthentication no
00:01:45.068   IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-ubuntu2004/20.04-1712646987-2220/libvirt/ubuntu2004
00:01:45.068   IdentitiesOnly yes
00:01:45.068   LogLevel FATAL
00:01:45.068   ForwardAgent yes
00:01:45.068   ForwardX11 yes
00:01:45.068
00:01:45.081 [Pipeline] withEnv
00:01:45.083 [Pipeline] {
00:01:45.098 [Pipeline] sh
00:01:45.375 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:01:45.375 source /etc/os-release
00:01:45.375 [[ -e /image.version ]] && img=$(< /image.version)
00:01:45.375 # Minimal, systemd-like check.
00:01:45.375 if [[ -e /.dockerenv ]]; then
00:01:45.375 	# Clear garbage from the node's name:
00:01:45.375 	# agt-er_autotest_547-896 -> autotest_547-896
00:01:45.375 	# $HOSTNAME is the actual container id
00:01:45.375 	agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:45.375 	if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:45.375 		# We can assume this is a mount from a host where container is running,
00:01:45.375 		# so fetch its hostname to easily identify the target swarm worker.
00:01:45.375 container="$(< /etc/hostname) ($agent)" 00:01:45.375 else 00:01:45.375 # Fallback 00:01:45.375 container=$agent 00:01:45.375 fi 00:01:45.375 fi 00:01:45.375 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:45.375 00:01:45.952 [Pipeline] } 00:01:45.971 [Pipeline] // withEnv 00:01:45.980 [Pipeline] setCustomBuildProperty 00:01:45.994 [Pipeline] stage 00:01:45.996 [Pipeline] { (Tests) 00:01:46.016 [Pipeline] sh 00:01:46.294 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu20-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:47.241 [Pipeline] sh 00:01:47.517 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu20-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:48.097 [Pipeline] timeout 00:01:48.097 Timeout set to expire in 1 hr 30 min 00:01:48.099 [Pipeline] { 00:01:48.114 [Pipeline] sh 00:01:48.390 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:49.346 HEAD is now at 4b94202c6 lib/event: Bug fix for framework_set_scheduler 00:01:49.353 [Pipeline] sh 00:01:49.623 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:50.189 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:01:50.207 [Pipeline] sh 00:01:50.486 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu20-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:51.070 [Pipeline] sh 00:01:51.349 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=ubuntu20-vg-autotest ./autoruner.sh spdk_repo 00:01:51.916 ++ readlink -f spdk_repo 00:01:51.916 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:51.916 + [[ -n /home/vagrant/spdk_repo ]] 00:01:51.916 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:51.916 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:51.916 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:51.916 + [[ ! 
00:01:51.916 + [[ -d /home/vagrant/spdk_repo/output ]]
00:01:51.916 + [[ ubuntu20-vg-autotest == pkgdep-* ]]
00:01:51.916 + cd /home/vagrant/spdk_repo
00:01:51.916 + source /etc/os-release
00:01:51.916 ++ NAME=Ubuntu
00:01:51.916 ++ VERSION='20.04.6 LTS (Focal Fossa)'
00:01:51.916 ++ ID=ubuntu
00:01:51.916 ++ ID_LIKE=debian
00:01:51.916 ++ PRETTY_NAME='Ubuntu 20.04.6 LTS'
00:01:51.916 ++ VERSION_ID=20.04
00:01:51.916 ++ HOME_URL=https://www.ubuntu.com/
00:01:51.916 ++ SUPPORT_URL=https://help.ubuntu.com/
00:01:51.916 ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/
00:01:51.916 ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy
00:01:51.916 ++ VERSION_CODENAME=focal
00:01:51.916 ++ UBUNTU_CODENAME=focal
00:01:51.916 + uname -a
00:01:51.916 Linux ubuntu2004-cloud-1712646987-2220 5.4.0-176-generic #196-Ubuntu SMP Fri Mar 22 16:46:39 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
00:01:51.916 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:01:51.916 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core"
00:01:52.174 Hugepages
00:01:52.174 node     hugesize     free /  total
00:01:52.174 node0   1048576kB        0 /      0
00:01:52.174 node0      2048kB        0 /      0
00:01:52.174
00:01:52.174 Type     BDF             Vendor  Device  NUMA    Driver      Device      Block devices
00:01:52.174 virtio   0000:00:03.0    1af4    1001    unknown virtio-pci  -           vda
00:01:52.174 NVMe     0000:00:06.0    1b36    0010    unknown nvme        nvme0       nvme0n1
00:01:52.174 + rm -f /tmp/spdk-ld-path
00:01:52.174 + source autorun-spdk.conf
00:01:52.174 ++ SPDK_TEST_UNITTEST=1
00:01:52.174 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:52.174 ++ SPDK_TEST_NVME=1
00:01:52.174 ++ SPDK_TEST_BLOCKDEV=1
00:01:52.174 ++ SPDK_RUN_ASAN=1
00:01:52.174 ++ SPDK_RUN_UBSAN=1
00:01:52.174 ++ SPDK_TEST_RAID5=1
00:01:52.174 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:52.174 ++ RUN_NIGHTLY=1
00:01:52.174 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:52.174 + [[ -n '' ]]
00:01:52.174 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:01:52.174 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core"
00:01:52.174 + for M in /var/spdk/build-*-manifest.txt
00:01:52.174 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:52.174 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:01:52.174 + for M in /var/spdk/build-*-manifest.txt
00:01:52.174 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:52.174 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:01:52.174 ++ uname
00:01:52.174 + [[ Linux == \L\i\n\u\x ]]
00:01:52.174 + sudo dmesg -T
00:01:52.174 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core"
00:01:52.174 + sudo dmesg --clear
00:01:52.174 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core"
00:01:52.174 + dmesg_pid=2375
00:01:52.174 + [[ Ubuntu == FreeBSD ]]
00:01:52.174 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:52.174 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:52.174 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:52.174 + sudo dmesg -Tw
00:01:52.174 + [[ -x /usr/src/fio-static/fio ]]
00:01:52.174 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:52.174 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:52.174 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:52.174 + vfios=(/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64)
00:01:52.174 + export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:01:52.174 + VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:01:52.174 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:52.174 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:52.433 Test configuration:
00:01:52.433 SPDK_TEST_UNITTEST=1
00:01:52.433 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:52.433 SPDK_TEST_NVME=1
00:01:52.433 SPDK_TEST_BLOCKDEV=1
00:01:52.433 SPDK_RUN_ASAN=1
00:01:52.433 SPDK_RUN_UBSAN=1
00:01:52.433 SPDK_TEST_RAID5=1
00:01:52.433 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:52.433 RUN_NIGHTLY=1
00:01:52.433 10:15:45 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:01:52.433 10:15:45 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:52.433 10:15:45 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:52.433 10:15:45 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:52.433 10:15:45 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:01:52.433 10:15:45 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:01:52.433 10:15:45 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:01:52.433 10:15:45 -- paths/export.sh@5 -- $ export PATH
00:01:52.433 10:15:45 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:01:52.433 10:15:45 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:01:52.433 10:15:45 -- common/autobuild_common.sh@435 -- $ date +%s
00:01:52.433 10:15:45 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1720779345.XXXXXX
00:01:52.433 10:15:45 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1720779345.UgON0u
00:01:52.433 10:15:45 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]]
00:01:52.433 10:15:45 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']'
00:01:52.433 10:15:45 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:01:52.433 10:15:45 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:01:52.433 10:15:45 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
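[Editor's note: the SPDK_WORKSPACE assignment above uses mktemp's template form to get a private scratch directory per run. A minimal sketch of the same idiom; the cleanup trap is an addition for illustration, not something autobuild_common.sh is shown doing here:

    #!/bin/bash
    # -d: create a directory; -t: place it under $TMPDIR (default /tmp).
    SPDK_WORKSPACE=$(mktemp -dt "spdk_$(date +%s).XXXXXX")
    # Added for illustration: drop the scratch dir even if the build fails mid-way.
    trap 'rm -rf "$SPDK_WORKSPACE"' EXIT
    echo "workspace: $SPDK_WORKSPACE"
]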
00:01:52.433 10:15:45 -- common/autobuild_common.sh@451 -- $ get_config_params
00:01:52.433 10:15:45 -- common/autotest_common.sh@387 -- $ xtrace_disable
00:01:52.433 10:15:45 -- common/autotest_common.sh@10 -- $ set +x
00:01:52.433 10:15:45 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f'
00:01:52.433 10:15:45 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:52.433 10:15:45 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:52.433 10:15:45 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:01:52.433 10:15:45 -- spdk/autobuild.sh@16 -- $ date -u
00:01:52.433 Fri Jul 12 10:15:45 UTC 2024
00:01:52.433 10:15:45 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:52.433 LTS-59-g4b94202c6
00:01:52.433 10:15:45 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:52.433 10:15:45 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:52.434 10:15:45 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']'
00:01:52.434 10:15:45 -- common/autotest_common.sh@1083 -- $ xtrace_disable
00:01:52.434 10:15:45 -- common/autotest_common.sh@10 -- $ set +x
00:01:52.434 ************************************
00:01:52.434 START TEST asan
00:01:52.434 ************************************
00:01:52.434 10:15:45 -- common/autotest_common.sh@1104 -- $ echo 'using asan'
00:01:52.434 using asan
00:01:52.434
00:01:52.434 real	0m0.000s
00:01:52.434 user	0m0.000s
00:01:52.434 sys	0m0.000s
00:01:52.434 10:15:45 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:01:52.434 10:15:45 -- common/autotest_common.sh@10 -- $ set +x
00:01:52.434 ************************************
00:01:52.434 END TEST asan
00:01:52.434 ************************************
00:01:52.434 10:15:45 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:52.434 10:15:45 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:52.434 10:15:45 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']'
00:01:52.434 10:15:45 -- common/autotest_common.sh@1083 -- $ xtrace_disable
00:01:52.434 10:15:45 -- common/autotest_common.sh@10 -- $ set +x
00:01:52.434 ************************************
00:01:52.434 START TEST ubsan
00:01:52.434 ************************************
00:01:52.434 using ubsan
00:01:52.434 10:15:45 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan'
00:01:52.434
00:01:52.434 real	0m0.000s
00:01:52.434 user	0m0.000s
00:01:52.434 sys	0m0.000s
00:01:52.434 10:15:45 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:01:52.434 ************************************
00:01:52.434 END TEST ubsan
00:01:52.434 ************************************
00:01:52.434 10:15:45 -- common/autotest_common.sh@10 -- $ set +x
00:01:52.434 10:15:45 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:52.434 10:15:45 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:52.434 10:15:45 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:52.434 10:15:45 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:52.434 10:15:45 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:52.434 10:15:45 -- spdk/autobuild.sh@57 -- $ [[ 1 -eq 1 ]]
00:01:52.434 10:15:45 -- spdk/autobuild.sh@58 -- $ unittest_build
00:01:52.434 10:15:45 -- common/autobuild_common.sh@411 -- $ run_test unittest_build _unittest_build
00:01:52.434 10:15:45 -- common/autotest_common.sh@1077 -- $ '[' 2 -le 1 ']'
00:01:52.434 10:15:45 -- common/autotest_common.sh@1083 -- $ xtrace_disable
00:01:52.434 10:15:45 -- common/autotest_common.sh@10 -- $ set +x
00:01:52.434 ************************************
00:01:52.434 START TEST unittest_build
00:01:52.434 ************************************
00:01:52.434 10:15:45 -- common/autotest_common.sh@1104 -- $ _unittest_build
00:01:52.434 10:15:45 -- common/autobuild_common.sh@402 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --without-shared
00:01:52.694 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:01:52.694 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:01:52.953 Using 'verbs' RDMA provider
00:02:08.391 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done.
00:02:20.592 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done.
00:02:20.851 Creating mk/config.mk...done.
00:02:20.851 Creating mk/cc.flags.mk...done.
00:02:20.851 Type 'make' to build.
00:02:20.851 10:16:14 -- common/autobuild_common.sh@403 -- $ make -j10
00:02:21.109 make[1]: Nothing to be done for 'all'.
00:02:23.010 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other]
00:02:23.267 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other]
00:02:25.584 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other]
00:02:25.841 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other]
`.note.gnu.property' [-w+other] 00:02:32.858 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:32.858 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:33.117 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:33.117 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:33.117 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:33.117 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:33.117 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:33.117 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:33.117 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:33.375 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:33.375 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:33.375 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:33.375 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:33.633 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:33.633 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:33.891 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:34.149 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:34.149 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:34.149 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:34.407 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:34.973 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:34.973 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:34.973 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:35.231 ./include//reg_sizes.asm:358: 
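These NASM diagnostics come from the isa-l assembly sources and appear benign here: the assembler simply ignores a section attribute it does not recognize and the build continues. As an illustration only (not part of this job; note_test.asm is a throwaway name), the same message can typically be reproduced outside the build with an older NASM release, while a current NASM assembles the declaration silently:

    # Illustration under those assumptions: declare the section with the
    # 'note' attribute named in the warning and assemble it.
    nasm -v                                              # which NASM the build picked up
    printf 'section .note.gnu.property note\n' > note_test.asm
    nasm -f elf64 -o note_test.o note_test.asm           # old NASM: [-w+other] warning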
00:02:36.262 The Meson build system 00:02:36.262 Version: 1.4.0 00:02:36.262 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:36.262 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:36.262 Build type: native build 00:02:36.262 Program cat found: YES (/usr/bin/cat) 00:02:36.262 Project name: DPDK 00:02:36.262 Project version: 23.11.0 00:02:36.262 C compiler for the host machine: cc (gcc 9.4.0 "cc (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0") 00:02:36.262 C linker for the host machine: cc ld.bfd 2.34 00:02:36.262 Host machine cpu family: x86_64 00:02:36.262 Host machine cpu: x86_64 00:02:36.262 Message: ## Building in Developer Mode ## 00:02:36.262 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:36.262 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:36.262 Program options-ibverbs-static.sh
found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:36.262 Program python3 found: YES (/usr/bin/python3) 00:02:36.262 Program cat found: YES (/usr/bin/cat) 00:02:36.262 Compiler for C supports arguments -march=native: YES 00:02:36.262 Checking for size of "void *" : 8 00:02:36.262 Checking for size of "void *" : 8 (cached) 00:02:36.262 Library m found: YES 00:02:36.262 Library numa found: YES 00:02:36.262 Has header "numaif.h" : YES 00:02:36.262 Library fdt found: NO 00:02:36.262 Library execinfo found: NO 00:02:36.262 Has header "execinfo.h" : YES 00:02:36.262 Found pkg-config: YES (/usr/bin/pkg-config) 0.29.1 00:02:36.262 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:36.262 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:36.262 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:36.262 Run-time dependency openssl found: YES 1.1.1f 00:02:36.262 Run-time dependency libpcap found: NO (tried pkgconfig) 00:02:36.262 Library pcap found: NO 00:02:36.262 Compiler for C supports arguments -Wcast-qual: YES 00:02:36.262 Compiler for C supports arguments -Wdeprecated: YES 00:02:36.262 Compiler for C supports arguments -Wformat: YES 00:02:36.262 Compiler for C supports arguments -Wformat-nonliteral: YES 00:02:36.262 Compiler for C supports arguments -Wformat-security: YES 00:02:36.262 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:36.262 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:36.262 Compiler for C supports arguments -Wnested-externs: YES 00:02:36.262 Compiler for C supports arguments -Wold-style-definition: YES 00:02:36.262 Compiler for C supports arguments -Wpointer-arith: YES 00:02:36.262 Compiler for C supports arguments -Wsign-compare: YES 00:02:36.262 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:36.262 Compiler for C supports arguments -Wundef: YES 00:02:36.262 Compiler for C supports arguments -Wwrite-strings: YES 00:02:36.262 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:36.262 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:36.262 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:36.262 Program objdump found: YES (/usr/bin/objdump) 00:02:36.262 Compiler for C supports arguments -mavx512f: YES 00:02:36.262 Checking if "AVX512 checking" compiles: YES 00:02:36.262 Fetching value of define "__SSE4_2__" : 1 00:02:36.262 Fetching value of define "__AES__" : 1 00:02:36.262 Fetching value of define "__AVX__" : 1 00:02:36.262 Fetching value of define "__AVX2__" : 1 00:02:36.262 Fetching value of define "__AVX512BW__" : (undefined) 00:02:36.262 Fetching value of define "__AVX512CD__" : (undefined) 00:02:36.262 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:36.262 Fetching value of define "__AVX512F__" : (undefined) 00:02:36.262 Fetching value of define "__AVX512VL__" : (undefined) 00:02:36.262 Fetching value of define "__PCLMUL__" : 1 00:02:36.262 Fetching value of define "__RDRND__" : 1 00:02:36.262 Fetching value of define "__RDSEED__" : 1 00:02:36.262 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:36.262 Fetching value of define "__znver1__" : (undefined) 00:02:36.262 Fetching value of define "__znver2__" : (undefined) 00:02:36.262 Fetching value of define "__znver3__" : (undefined) 00:02:36.262 Fetching value of define "__znver4__" : (undefined) 00:02:36.262 Library asan found: YES 00:02:36.262 Compiler for C supports arguments 
-Wno-format-truncation: YES 00:02:36.262 Message: lib/log: Defining dependency "log" 00:02:36.262 Message: lib/kvargs: Defining dependency "kvargs" 00:02:36.262 Message: lib/telemetry: Defining dependency "telemetry" 00:02:36.262 Library rt found: YES 00:02:36.262 Checking for function "getentropy" : NO 00:02:36.262 Message: lib/eal: Defining dependency "eal" 00:02:36.262 Message: lib/ring: Defining dependency "ring" 00:02:36.262 Message: lib/rcu: Defining dependency "rcu" 00:02:36.262 Message: lib/mempool: Defining dependency "mempool" 00:02:36.262 Message: lib/mbuf: Defining dependency "mbuf" 00:02:36.262 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:36.262 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:36.262 Compiler for C supports arguments -mpclmul: YES 00:02:36.262 Compiler for C supports arguments -maes: YES 00:02:36.262 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:36.262 Compiler for C supports arguments -mavx512bw: YES 00:02:36.262 Compiler for C supports arguments -mavx512dq: YES 00:02:36.262 Compiler for C supports arguments -mavx512vl: YES 00:02:36.262 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:36.262 Compiler for C supports arguments -mavx2: YES 00:02:36.262 Compiler for C supports arguments -mavx: YES 00:02:36.262 Message: lib/net: Defining dependency "net" 00:02:36.262 Message: lib/meter: Defining dependency "meter" 00:02:36.262 Message: lib/ethdev: Defining dependency "ethdev" 00:02:36.262 Message: lib/pci: Defining dependency "pci" 00:02:36.262 Message: lib/cmdline: Defining dependency "cmdline" 00:02:36.262 Message: lib/hash: Defining dependency "hash" 00:02:36.262 Message: lib/timer: Defining dependency "timer" 00:02:36.262 Message: lib/compressdev: Defining dependency "compressdev" 00:02:36.262 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:36.262 Message: lib/dmadev: Defining dependency "dmadev" 00:02:36.262 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:36.262 Message: lib/power: Defining dependency "power" 00:02:36.262 Message: lib/reorder: Defining dependency "reorder" 00:02:36.262 Message: lib/security: Defining dependency "security" 00:02:36.262 Has header "linux/userfaultfd.h" : YES 00:02:36.262 Has header "linux/vduse.h" : NO 00:02:36.262 Message: lib/vhost: Defining dependency "vhost" 00:02:36.262 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:36.262 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:36.262 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:36.262 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:36.262 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:36.262 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:36.262 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:36.262 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:36.262 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:36.262 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:36.262 Program doxygen found: YES (/usr/bin/doxygen) 00:02:36.262 Configuring doxy-api-html.conf using configuration 00:02:36.262 Configuring doxy-api-man.conf using configuration 00:02:36.262 Program mandb found: YES (/usr/bin/mandb) 00:02:36.262 Program sphinx-build found: NO 00:02:36.262 Configuring rte_build_config.h using configuration 00:02:36.262 Message: 00:02:36.262 
================= 00:02:36.262 Applications Enabled 00:02:36.262 ================= 00:02:36.262 00:02:36.262 apps: 00:02:36.262 00:02:36.262 00:02:36.262 Message: 00:02:36.262 ================= 00:02:36.262 Libraries Enabled 00:02:36.262 ================= 00:02:36.262 00:02:36.262 libs: 00:02:36.262 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:36.262 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:36.262 cryptodev, dmadev, power, reorder, security, vhost, 00:02:36.262 00:02:36.262 Message: 00:02:36.262 =============== 00:02:36.262 Drivers Enabled 00:02:36.262 =============== 00:02:36.262 00:02:36.262 common: 00:02:36.262 00:02:36.262 bus: 00:02:36.262 pci, vdev, 00:02:36.262 mempool: 00:02:36.262 ring, 00:02:36.262 dma: 00:02:36.262 00:02:36.262 net: 00:02:36.262 00:02:36.262 crypto: 00:02:36.262 00:02:36.262 compress: 00:02:36.262 00:02:36.262 vdpa: 00:02:36.262 00:02:36.262 00:02:36.262 Message: 00:02:36.262 ================= 00:02:36.262 Content Skipped 00:02:36.262 ================= 00:02:36.262 00:02:36.262 apps: 00:02:36.262 dumpcap: explicitly disabled via build config 00:02:36.262 graph: explicitly disabled via build config 00:02:36.262 pdump: explicitly disabled via build config 00:02:36.262 proc-info: explicitly disabled via build config 00:02:36.262 test-acl: explicitly disabled via build config 00:02:36.262 test-bbdev: explicitly disabled via build config 00:02:36.262 test-cmdline: explicitly disabled via build config 00:02:36.262 test-compress-perf: explicitly disabled via build config 00:02:36.262 test-crypto-perf: explicitly disabled via build config 00:02:36.262 test-dma-perf: explicitly disabled via build config 00:02:36.262 test-eventdev: explicitly disabled via build config 00:02:36.262 test-fib: explicitly disabled via build config 00:02:36.262 test-flow-perf: explicitly disabled via build config 00:02:36.262 test-gpudev: explicitly disabled via build config 00:02:36.262 test-mldev: explicitly disabled via build config 00:02:36.262 test-pipeline: explicitly disabled via build config 00:02:36.262 test-pmd: explicitly disabled via build config 00:02:36.262 test-regex: explicitly disabled via build config 00:02:36.262 test-sad: explicitly disabled via build config 00:02:36.262 test-security-perf: explicitly disabled via build config 00:02:36.262 00:02:36.262 libs: 00:02:36.262 metrics: explicitly disabled via build config 00:02:36.262 acl: explicitly disabled via build config 00:02:36.262 bbdev: explicitly disabled via build config 00:02:36.262 bitratestats: explicitly disabled via build config 00:02:36.262 bpf: explicitly disabled via build config 00:02:36.262 cfgfile: explicitly disabled via build config 00:02:36.262 distributor: explicitly disabled via build config 00:02:36.263 efd: explicitly disabled via build config 00:02:36.263 eventdev: explicitly disabled via build config 00:02:36.263 dispatcher: explicitly disabled via build config 00:02:36.263 gpudev: explicitly disabled via build config 00:02:36.263 gro: explicitly disabled via build config 00:02:36.263 gso: explicitly disabled via build config 00:02:36.263 ip_frag: explicitly disabled via build config 00:02:36.263 jobstats: explicitly disabled via build config 00:02:36.263 latencystats: explicitly disabled via build config 00:02:36.263 lpm: explicitly disabled via build config 00:02:36.263 member: explicitly disabled via build config 00:02:36.263 pcapng: explicitly disabled via build config 00:02:36.263 rawdev: explicitly disabled via build config 00:02:36.263 regexdev: 
explicitly disabled via build config 00:02:36.263 mldev: explicitly disabled via build config 00:02:36.263 rib: explicitly disabled via build config 00:02:36.263 sched: explicitly disabled via build config 00:02:36.263 stack: explicitly disabled via build config 00:02:36.263 ipsec: explicitly disabled via build config 00:02:36.263 pdcp: explicitly disabled via build config 00:02:36.263 fib: explicitly disabled via build config 00:02:36.263 port: explicitly disabled via build config 00:02:36.263 pdump: explicitly disabled via build config 00:02:36.263 table: explicitly disabled via build config 00:02:36.263 pipeline: explicitly disabled via build config 00:02:36.263 graph: explicitly disabled via build config 00:02:36.263 node: explicitly disabled via build config 00:02:36.263 00:02:36.263 drivers: 00:02:36.263 common/cpt: not in enabled drivers build config 00:02:36.263 common/dpaax: not in enabled drivers build config 00:02:36.263 common/iavf: not in enabled drivers build config 00:02:36.263 common/idpf: not in enabled drivers build config 00:02:36.263 common/mvep: not in enabled drivers build config 00:02:36.263 common/octeontx: not in enabled drivers build config 00:02:36.263 bus/auxiliary: not in enabled drivers build config 00:02:36.263 bus/cdx: not in enabled drivers build config 00:02:36.263 bus/dpaa: not in enabled drivers build config 00:02:36.263 bus/fslmc: not in enabled drivers build config 00:02:36.263 bus/ifpga: not in enabled drivers build config 00:02:36.263 bus/platform: not in enabled drivers build config 00:02:36.263 bus/vmbus: not in enabled drivers build config 00:02:36.263 common/cnxk: not in enabled drivers build config 00:02:36.263 common/mlx5: not in enabled drivers build config 00:02:36.263 common/nfp: not in enabled drivers build config 00:02:36.263 common/qat: not in enabled drivers build config 00:02:36.263 common/sfc_efx: not in enabled drivers build config 00:02:36.263 mempool/bucket: not in enabled drivers build config 00:02:36.263 mempool/cnxk: not in enabled drivers build config 00:02:36.263 mempool/dpaa: not in enabled drivers build config 00:02:36.263 mempool/dpaa2: not in enabled drivers build config 00:02:36.263 mempool/octeontx: not in enabled drivers build config 00:02:36.263 mempool/stack: not in enabled drivers build config 00:02:36.263 dma/cnxk: not in enabled drivers build config 00:02:36.263 dma/dpaa: not in enabled drivers build config 00:02:36.263 dma/dpaa2: not in enabled drivers build config 00:02:36.263 dma/hisilicon: not in enabled drivers build config 00:02:36.263 dma/idxd: not in enabled drivers build config 00:02:36.263 dma/ioat: not in enabled drivers build config 00:02:36.263 dma/skeleton: not in enabled drivers build config 00:02:36.263 net/af_packet: not in enabled drivers build config 00:02:36.263 net/af_xdp: not in enabled drivers build config 00:02:36.263 net/ark: not in enabled drivers build config 00:02:36.263 net/atlantic: not in enabled drivers build config 00:02:36.263 net/avp: not in enabled drivers build config 00:02:36.263 net/axgbe: not in enabled drivers build config 00:02:36.263 net/bnx2x: not in enabled drivers build config 00:02:36.263 net/bnxt: not in enabled drivers build config 00:02:36.263 net/bonding: not in enabled drivers build config 00:02:36.263 net/cnxk: not in enabled drivers build config 00:02:36.263 net/cpfl: not in enabled drivers build config 00:02:36.263 net/cxgbe: not in enabled drivers build config 00:02:36.263 net/dpaa: not in enabled drivers build config 00:02:36.263 net/dpaa2: not in enabled 
drivers build config 00:02:36.263 net/e1000: not in enabled drivers build config 00:02:36.263 net/ena: not in enabled drivers build config 00:02:36.263 net/enetc: not in enabled drivers build config 00:02:36.263 net/enetfec: not in enabled drivers build config 00:02:36.263 net/enic: not in enabled drivers build config 00:02:36.263 net/failsafe: not in enabled drivers build config 00:02:36.263 net/fm10k: not in enabled drivers build config 00:02:36.263 net/gve: not in enabled drivers build config 00:02:36.263 net/hinic: not in enabled drivers build config 00:02:36.263 net/hns3: not in enabled drivers build config 00:02:36.263 net/i40e: not in enabled drivers build config 00:02:36.263 net/iavf: not in enabled drivers build config 00:02:36.263 net/ice: not in enabled drivers build config 00:02:36.263 net/idpf: not in enabled drivers build config 00:02:36.263 net/igc: not in enabled drivers build config 00:02:36.263 net/ionic: not in enabled drivers build config 00:02:36.263 net/ipn3ke: not in enabled drivers build config 00:02:36.263 net/ixgbe: not in enabled drivers build config 00:02:36.263 net/mana: not in enabled drivers build config 00:02:36.263 net/memif: not in enabled drivers build config 00:02:36.263 net/mlx4: not in enabled drivers build config 00:02:36.263 net/mlx5: not in enabled drivers build config 00:02:36.263 net/mvneta: not in enabled drivers build config 00:02:36.263 net/mvpp2: not in enabled drivers build config 00:02:36.263 net/netvsc: not in enabled drivers build config 00:02:36.263 net/nfb: not in enabled drivers build config 00:02:36.263 net/nfp: not in enabled drivers build config 00:02:36.263 net/ngbe: not in enabled drivers build config 00:02:36.263 net/null: not in enabled drivers build config 00:02:36.263 net/octeontx: not in enabled drivers build config 00:02:36.263 net/octeon_ep: not in enabled drivers build config 00:02:36.263 net/pcap: not in enabled drivers build config 00:02:36.263 net/pfe: not in enabled drivers build config 00:02:36.263 net/qede: not in enabled drivers build config 00:02:36.263 net/ring: not in enabled drivers build config 00:02:36.263 net/sfc: not in enabled drivers build config 00:02:36.263 net/softnic: not in enabled drivers build config 00:02:36.263 net/tap: not in enabled drivers build config 00:02:36.263 net/thunderx: not in enabled drivers build config 00:02:36.263 net/txgbe: not in enabled drivers build config 00:02:36.263 net/vdev_netvsc: not in enabled drivers build config 00:02:36.263 net/vhost: not in enabled drivers build config 00:02:36.263 net/virtio: not in enabled drivers build config 00:02:36.263 net/vmxnet3: not in enabled drivers build config 00:02:36.263 raw/*: missing internal dependency, "rawdev" 00:02:36.263 crypto/armv8: not in enabled drivers build config 00:02:36.263 crypto/bcmfs: not in enabled drivers build config 00:02:36.263 crypto/caam_jr: not in enabled drivers build config 00:02:36.263 crypto/ccp: not in enabled drivers build config 00:02:36.263 crypto/cnxk: not in enabled drivers build config 00:02:36.263 crypto/dpaa_sec: not in enabled drivers build config 00:02:36.263 crypto/dpaa2_sec: not in enabled drivers build config 00:02:36.263 crypto/ipsec_mb: not in enabled drivers build config 00:02:36.263 crypto/mlx5: not in enabled drivers build config 00:02:36.263 crypto/mvsam: not in enabled drivers build config 00:02:36.263 crypto/nitrox: not in enabled drivers build config 00:02:36.263 crypto/null: not in enabled drivers build config 00:02:36.263 crypto/octeontx: not in enabled drivers build config 
00:02:36.263 crypto/openssl: not in enabled drivers build config 00:02:36.263 crypto/scheduler: not in enabled drivers build config 00:02:36.263 crypto/uadk: not in enabled drivers build config 00:02:36.263 crypto/virtio: not in enabled drivers build config 00:02:36.263 compress/isal: not in enabled drivers build config 00:02:36.263 compress/mlx5: not in enabled drivers build config 00:02:36.263 compress/octeontx: not in enabled drivers build config 00:02:36.263 compress/zlib: not in enabled drivers build config 00:02:36.263 regex/*: missing internal dependency, "regexdev" 00:02:36.263 ml/*: missing internal dependency, "mldev" 00:02:36.263 vdpa/ifc: not in enabled drivers build config 00:02:36.263 vdpa/mlx5: not in enabled drivers build config 00:02:36.263 vdpa/nfp: not in enabled drivers build config 00:02:36.263 vdpa/sfc: not in enabled drivers build config 00:02:36.263 event/*: missing internal dependency, "eventdev" 00:02:36.263 baseband/*: missing internal dependency, "bbdev" 00:02:36.263 gpu/*: missing internal dependency, "gpudev" 00:02:36.263 00:02:36.263 00:02:36.791 Build targets in project: 85 00:02:36.791 00:02:36.791 DPDK 23.11.0 00:02:36.791 00:02:36.791 User defined options 00:02:36.791 buildtype : debug 00:02:36.791 default_library : static 00:02:36.791 libdir : lib 00:02:36.791 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:36.791 b_sanitize : address 00:02:36.791 c_args : -fPIC -Werror 00:02:36.791 c_link_args : 00:02:36.791 cpu_instruction_set: native 00:02:36.791 disable_apps : graph,dumpcap,test,test-gpudev,test-dma-perf,test-cmdline,test-compress-perf,pdump,test-fib,test-mldev,test-regex,proc-info,test-crypto-perf,test-pipeline,test-security-perf,test-acl,test-sad,test-pmd,test-flow-perf,test-bbdev,test-eventdev 00:02:36.791 disable_libs : gro,eventdev,lpm,efd,node,acl,bitratestats,port,graph,pipeline,pdcp,gpudev,ipsec,jobstats,dispatcher,mldev,pdump,gso,metrics,latencystats,bbdev,rawdev,stack,member,cfgfile,sched,pcapng,bpf,ip_frag,distributor,fib,regexdev,rib,table 00:02:36.791 enable_docs : false 00:02:36.791 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:36.791 enable_kmods : false 00:02:36.791 tests : false 00:02:36.791 00:02:36.791 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
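The configuration summary above maps directly onto a Meson invocation. A minimal sketch of the equivalent configure-and-build commands, assuming the workspace layout the log prints and passing only values shown under "User defined options" (the long disable_apps/disable_libs lists are elided here for brevity):

    # Sketch of the configure step, then the same ninja command the log
    # runs next ("calculating backend command ... -j 10").
    cd /home/vagrant/spdk_repo/spdk/dpdk
    meson setup build-tmp \
      --buildtype=debug \
      --default-library=static \
      --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
      --libdir=lib \
      -Db_sanitize=address \
      -Dc_args='-fPIC -Werror' \
      -Dcpu_instruction_set=native \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
      -Denable_docs=false -Denable_kmods=false -Dtests=false
    ninja -C build-tmp -j 10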
00:02:37.326 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:37.326 [1/264] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:37.326 [2/264] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:37.326 [3/264] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:37.326 [4/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:37.326 [5/264] Linking static target lib/librte_kvargs.a 00:02:37.326 [6/264] Linking static target lib/librte_log.a 00:02:37.326 [7/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:37.582 [8/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:37.582 [9/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:37.582 [10/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:37.582 [11/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:37.837 [12/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:37.838 [13/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:37.838 [14/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:37.838 [15/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:37.838 [16/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:37.838 [17/264] Linking static target lib/librte_telemetry.a 00:02:38.140 [18/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:38.140 [19/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:38.140 [20/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:38.140 [21/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:38.140 [22/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:38.140 [23/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:38.140 [24/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:38.397 [25/264] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.397 [26/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:38.397 [27/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:38.397 [28/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:38.655 [29/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:38.655 [30/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:38.655 [31/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:38.655 [32/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:38.655 [33/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:38.655 [34/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:38.655 [35/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:38.655 [36/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:38.655 [37/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:38.912 [38/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:38.912 [39/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:38.912 [40/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:38.912 [41/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:38.912 [42/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:39.170 [43/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:39.170 [44/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:39.170 [45/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:39.170 [46/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:39.170 [47/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:39.170 [48/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:39.170 [49/264] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:39.170 [50/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:39.170 [51/264] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.170 [52/264] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.170 [53/264] Linking target lib/librte_log.so.24.0 00:02:39.426 [54/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:39.426 [55/264] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
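The *.sym_chk steps above appear to run the check-symbols.sh helper that Meson located earlier against each linked library. As a hedged spot-check with standard binutils (nothing here is performed by the job itself), the exported symbols of the shared object linked at step [53/264] can be listed directly:

    # List a few dynamic symbols defined by the freshly linked library,
    # assuming the build-tmp layout ninja prints above.
    cd /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
    nm -D --defined-only lib/librte_log.so.24.0 | head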
00:02:39.426 [56/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:39.426 [57/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:39.426 [58/264] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:39.426 [59/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:39.426 [60/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:39.426 [61/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:39.426 [62/264] Linking target lib/librte_kvargs.so.24.0 00:02:39.426 [63/264] Linking target lib/librte_telemetry.so.24.0 00:02:39.426 [64/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:39.426 [65/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:39.683 [66/264] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:39.683 [67/264] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:39.683 [68/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:39.683 [69/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:39.683 [70/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:39.683 [71/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:39.683 [72/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:39.683 [73/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:39.683 [74/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:39.683 [75/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:39.683 [76/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:39.683 [77/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:39.941 [78/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:39.941 [79/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:39.941 [80/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:39.941 [81/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:39.941 [82/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:39.941 [83/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:40.199 [84/264] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:40.199 [85/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:40.199 [86/264] Linking static target lib/librte_ring.a 00:02:40.199 [87/264] Linking static target lib/librte_eal.a 00:02:40.199 [88/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:40.199 [89/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:40.458 [90/264] Compiling C object
lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:40.458 [91/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:40.458 [92/264] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:40.458 [93/264] Linking static target lib/librte_rcu.a 00:02:40.458 [94/264] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:40.458 [95/264] Linking static target lib/librte_mempool.a 00:02:40.458 [96/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:40.458 [97/264] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.732 [98/264] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:40.732 [99/264] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:40.732 [100/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:40.732 [101/264] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.732 [102/264] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:40.732 [103/264] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:40.732 [104/264] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:40.991 [105/264] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:40.991 [106/264] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:40.991 [107/264] Linking static target lib/librte_net.a 00:02:40.991 [108/264] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:40.991 [109/264] Linking static target lib/librte_meter.a 00:02:40.991 [110/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:40.991 [111/264] Linking static target lib/librte_mbuf.a 00:02:40.991 [112/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:41.249 [113/264] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.249 [114/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:41.249 [115/264] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.249 [116/264] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.249 [117/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:41.249 [118/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:41.507 [119/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:41.507 [120/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:41.765 [121/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:41.765 [122/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:41.765 [123/264] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.765 [124/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:41.765 [125/264] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:41.765 [126/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:41.765 [127/264] Linking static target lib/librte_pci.a 00:02:41.765 [128/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:42.023 [129/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:42.023 [130/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:42.023 [131/264] Compiling C object 
lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:42.023 [132/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:42.023 [133/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:42.023 [134/264] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.023 [135/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:42.023 [136/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:42.023 [137/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:42.023 [138/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:42.023 [139/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:42.023 [140/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:42.281 [141/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:42.282 [142/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:42.282 [143/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:42.282 [144/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:42.282 [145/264] Linking static target lib/librte_cmdline.a 00:02:42.540 [146/264] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:42.540 [147/264] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:42.540 [148/264] Linking static target lib/librte_timer.a 00:02:42.540 [149/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:42.540 [150/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:42.540 [151/264] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:42.540 [152/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:42.799 [153/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:42.799 [154/264] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.059 [155/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:43.059 [156/264] Linking static target lib/librte_compressdev.a 00:02:43.059 [157/264] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:43.059 [158/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:43.059 [159/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:43.059 [160/264] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:43.059 [161/264] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:43.059 [162/264] Linking static target lib/librte_hash.a 00:02:43.316 [163/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:43.316 [164/264] Linking static target lib/librte_dmadev.a 00:02:43.316 [165/264] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.316 [166/264] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:43.316 [167/264] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:43.316 [168/264] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:43.316 [169/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:43.316 [170/264] Linking static target lib/librte_ethdev.a 00:02:43.573 [171/264] Compiling C 
object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:43.573 [172/264] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.573 [173/264] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.573 [174/264] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:43.832 [175/264] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:43.832 [176/264] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:43.832 [177/264] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:43.832 [178/264] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.832 [179/264] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:44.090 [180/264] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:44.090 [181/264] Linking static target lib/librte_power.a 00:02:44.090 [182/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:44.090 [183/264] Linking static target lib/librte_cryptodev.a 00:02:44.090 [184/264] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:44.090 [185/264] Linking static target lib/librte_reorder.a 00:02:44.090 [186/264] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:44.090 [187/264] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:44.348 [188/264] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:44.348 [189/264] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:44.348 [190/264] Linking static target lib/librte_security.a 00:02:44.348 [191/264] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.606 [192/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:44.865 [193/264] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.865 [194/264] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.865 [195/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:44.865 [196/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:45.125 [197/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:45.125 [198/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:45.125 [199/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:45.125 [200/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:45.125 [201/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:45.125 [202/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:45.383 [203/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:45.383 [204/264] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:45.383 [205/264] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.383 [206/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:45.383 [207/264] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:45.641 [208/264] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:45.641 [209/264] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:45.641 
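With default_library set to static, each enabled driver is still produced twice: steps [208]-[209] above generate and compile the vdev bus PMD stub for the shared object, and [210]-[211] below build and link the static archive (bus_pci follows at [212]-[215], with the shared objects linked later, e.g. [228/264]). A hypothetical post-build check, assuming the paths exactly as ninja prints them:

    # Confirm both artifacts exist once the build finishes, and list the
    # objects inside the static archive.
    cd /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
    ls -l drivers/librte_bus_vdev.a drivers/librte_bus_vdev.so.24.0
    ar t drivers/librte_bus_vdev.a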
[210/264] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:45.641 [211/264] Linking static target drivers/librte_bus_vdev.a 00:02:45.641 [212/264] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:45.641 [213/264] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:45.641 [214/264] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:45.641 [215/264] Linking static target drivers/librte_bus_pci.a 00:02:45.641 [216/264] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:45.641 [217/264] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:45.899 [218/264] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.899 [219/264] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:45.899 [220/264] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:45.899 [221/264] Linking static target drivers/librte_mempool_ring.a 00:02:45.899 [222/264] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:46.157 [223/264] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.531 [224/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:47.531 [225/264] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.790 [226/264] Linking target lib/librte_eal.so.24.0 00:02:47.790 [227/264] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:47.790 [228/264] Linking target drivers/librte_bus_vdev.so.24.0 00:02:47.790 [229/264] Linking target lib/librte_meter.so.24.0 00:02:47.790 [230/264] Linking target lib/librte_dmadev.so.24.0 00:02:47.790 [231/264] Linking target lib/librte_ring.so.24.0 00:02:47.790 [232/264] Linking target lib/librte_pci.so.24.0 00:02:47.790 [233/264] Linking target lib/librte_timer.so.24.0 00:02:48.048 [234/264] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:48.048 [235/264] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:48.048 [236/264] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:48.049 [237/264] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:48.049 [238/264] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:48.049 [239/264] Linking target lib/librte_mempool.so.24.0 00:02:48.049 [240/264] Linking target lib/librte_rcu.so.24.0 00:02:48.049 [241/264] Linking target drivers/librte_bus_pci.so.24.0 00:02:48.049 [242/264] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:48.049 [243/264] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:48.049 [244/264] Linking target drivers/librte_mempool_ring.so.24.0 00:02:48.049 [245/264] Linking target lib/librte_mbuf.so.24.0 00:02:48.307 [246/264] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:48.307 [247/264] Linking target lib/librte_compressdev.so.24.0 00:02:48.307 [248/264] Linking target lib/librte_net.so.24.0 00:02:48.307 [249/264] Linking target lib/librte_cryptodev.so.24.0 00:02:48.307 [250/264] Linking target lib/librte_reorder.so.24.0 00:02:48.307 [251/264] Generating 
symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:48.567 [252/264] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:48.567 [253/264] Linking target lib/librte_hash.so.24.0 00:02:48.567 [254/264] Linking target lib/librte_cmdline.so.24.0 00:02:48.567 [255/264] Linking target lib/librte_security.so.24.0 00:02:48.567 [256/264] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:49.502 [257/264] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.502 [258/264] Linking target lib/librte_ethdev.so.24.0 00:02:49.502 [259/264] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:49.502 [260/264] Linking target lib/librte_power.so.24.0 00:02:50.904 [261/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:50.904 [262/264] Linking static target lib/librte_vhost.a 00:02:52.805 [263/264] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.805 [264/264] Linking target lib/librte_vhost.so.24.0 00:02:52.805 INFO: autodetecting backend as ninja 00:02:52.805 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:53.736 CC lib/ut_mock/mock.o 00:02:53.736 CC lib/log/log.o 00:02:53.736 CC lib/log/log_flags.o 00:02:53.736 CC lib/log/log_deprecated.o 00:02:53.736 CC lib/ut/ut.o 00:02:53.736 LIB libspdk_ut_mock.a 00:02:53.994 LIB libspdk_log.a 00:02:53.994 LIB libspdk_ut.a 00:02:53.994 CC lib/dma/dma.o 00:02:53.994 CXX lib/trace_parser/trace.o 00:02:53.994 CC lib/ioat/ioat.o 00:02:53.994 CC lib/util/bit_array.o 00:02:53.994 CC lib/util/base64.o 00:02:53.994 CC lib/util/cpuset.o 00:02:53.994 CC lib/util/crc16.o 00:02:53.994 CC lib/util/crc32.o 00:02:53.994 CC lib/util/crc32c.o 00:02:53.994 CC lib/vfio_user/host/vfio_user_pci.o 00:02:54.251 CC lib/vfio_user/host/vfio_user.o 00:02:54.251 CC lib/util/crc32_ieee.o 00:02:54.251 CC lib/util/crc64.o 00:02:54.251 LIB libspdk_dma.a 00:02:54.251 CC lib/util/dif.o 00:02:54.251 CC lib/util/fd.o 00:02:54.251 CC lib/util/file.o 00:02:54.251 CC lib/util/hexlify.o 00:02:54.251 CC lib/util/iov.o 00:02:54.251 CC lib/util/math.o 00:02:54.251 CC lib/util/pipe.o 00:02:54.251 LIB libspdk_ioat.a 00:02:54.251 LIB libspdk_vfio_user.a 00:02:54.509 CC lib/util/strerror_tls.o 00:02:54.509 CC lib/util/string.o 00:02:54.509 CC lib/util/uuid.o 00:02:54.509 CC lib/util/fd_group.o 00:02:54.509 CC lib/util/xor.o 00:02:54.509 CC lib/util/zipf.o 00:02:55.076 LIB libspdk_util.a 00:02:55.076 CC lib/json/json_parse.o 00:02:55.076 CC lib/json/json_util.o 00:02:55.076 CC lib/json/json_write.o 00:02:55.076 CC lib/env_dpdk/env.o 00:02:55.076 CC lib/env_dpdk/memory.o 00:02:55.076 CC lib/conf/conf.o 00:02:55.076 CC lib/vmd/vmd.o 00:02:55.076 CC lib/rdma/common.o 00:02:55.076 CC lib/idxd/idxd.o 00:02:55.335 LIB libspdk_trace_parser.a 00:02:55.335 CC lib/idxd/idxd_user.o 00:02:55.335 CC lib/env_dpdk/pci.o 00:02:55.335 CC lib/env_dpdk/init.o 00:02:55.335 LIB libspdk_conf.a 00:02:55.335 LIB libspdk_json.a 00:02:55.335 CC lib/env_dpdk/threads.o 00:02:55.335 CC lib/env_dpdk/pci_ioat.o 00:02:55.335 CC lib/rdma/rdma_verbs.o 00:02:55.593 CC lib/env_dpdk/pci_virtio.o 00:02:55.593 CC lib/env_dpdk/pci_vmd.o 00:02:55.593 LIB libspdk_rdma.a 00:02:55.593 CC lib/vmd/led.o 00:02:55.593 CC lib/env_dpdk/pci_idxd.o 00:02:55.593 CC lib/env_dpdk/pci_event.o 00:02:55.593 CC lib/env_dpdk/sigbus_handler.o 00:02:55.593 CC 
lib/jsonrpc/jsonrpc_server.o 00:02:55.850 CC lib/env_dpdk/pci_dpdk.o 00:02:55.850 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:55.850 LIB libspdk_idxd.a 00:02:55.850 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:55.850 CC lib/jsonrpc/jsonrpc_client.o 00:02:55.850 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:55.850 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:55.850 LIB libspdk_vmd.a 00:02:56.108 LIB libspdk_jsonrpc.a 00:02:56.108 CC lib/rpc/rpc.o 00:02:56.366 LIB libspdk_rpc.a 00:02:56.623 CC lib/trace/trace.o 00:02:56.623 CC lib/trace/trace_rpc.o 00:02:56.623 CC lib/trace/trace_flags.o 00:02:56.623 CC lib/notify/notify.o 00:02:56.623 CC lib/notify/notify_rpc.o 00:02:56.623 CC lib/sock/sock_rpc.o 00:02:56.623 CC lib/sock/sock.o 00:02:56.623 LIB libspdk_notify.a 00:02:56.623 LIB libspdk_env_dpdk.a 00:02:56.881 LIB libspdk_trace.a 00:02:56.881 CC lib/thread/thread.o 00:02:56.881 CC lib/thread/iobuf.o 00:02:56.881 LIB libspdk_sock.a 00:02:57.139 CC lib/nvme/nvme_ctrlr.o 00:02:57.139 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:57.139 CC lib/nvme/nvme_fabric.o 00:02:57.139 CC lib/nvme/nvme_pcie.o 00:02:57.139 CC lib/nvme/nvme_ns_cmd.o 00:02:57.139 CC lib/nvme/nvme_ns.o 00:02:57.139 CC lib/nvme/nvme_pcie_common.o 00:02:57.139 CC lib/nvme/nvme_qpair.o 00:02:57.396 CC lib/nvme/nvme.o 00:02:57.654 CC lib/nvme/nvme_quirks.o 00:02:57.654 CC lib/nvme/nvme_transport.o 00:02:57.912 CC lib/nvme/nvme_discovery.o 00:02:57.912 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:57.912 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:57.912 CC lib/nvme/nvme_tcp.o 00:02:58.169 CC lib/nvme/nvme_opal.o 00:02:58.169 CC lib/nvme/nvme_io_msg.o 00:02:58.169 CC lib/nvme/nvme_poll_group.o 00:02:58.169 CC lib/nvme/nvme_zns.o 00:02:58.169 CC lib/nvme/nvme_cuse.o 00:02:58.432 CC lib/nvme/nvme_vfio_user.o 00:02:58.432 CC lib/nvme/nvme_rdma.o 00:02:58.695 LIB libspdk_thread.a 00:02:58.953 CC lib/blob/blobstore.o 00:02:58.953 CC lib/blob/request.o 00:02:58.953 CC lib/init/json_config.o 00:02:58.953 CC lib/blob/zeroes.o 00:02:58.953 CC lib/virtio/virtio.o 00:02:58.953 CC lib/accel/accel.o 00:02:59.210 CC lib/accel/accel_rpc.o 00:02:59.210 CC lib/accel/accel_sw.o 00:02:59.210 CC lib/init/subsystem.o 00:02:59.210 CC lib/virtio/virtio_vhost_user.o 00:02:59.210 CC lib/blob/blob_bs_dev.o 00:02:59.210 CC lib/virtio/virtio_vfio_user.o 00:02:59.466 CC lib/virtio/virtio_pci.o 00:02:59.466 CC lib/init/subsystem_rpc.o 00:02:59.466 CC lib/init/rpc.o 00:02:59.723 LIB libspdk_init.a 00:02:59.723 LIB libspdk_virtio.a 00:02:59.723 CC lib/event/app.o 00:02:59.723 CC lib/event/reactor.o 00:02:59.723 CC lib/event/log_rpc.o 00:02:59.723 CC lib/event/scheduler_static.o 00:02:59.723 CC lib/event/app_rpc.o 00:02:59.723 LIB libspdk_nvme.a 00:03:00.288 LIB libspdk_accel.a 00:03:00.288 LIB libspdk_event.a 00:03:00.288 CC lib/bdev/bdev.o 00:03:00.288 CC lib/bdev/bdev_rpc.o 00:03:00.288 CC lib/bdev/bdev_zone.o 00:03:00.288 CC lib/bdev/scsi_nvme.o 00:03:00.288 CC lib/bdev/part.o 00:03:02.812 LIB libspdk_blob.a 00:03:02.812 CC lib/blobfs/tree.o 00:03:02.812 CC lib/blobfs/blobfs.o 00:03:02.812 CC lib/lvol/lvol.o 00:03:03.376 LIB libspdk_bdev.a 00:03:03.633 CC lib/scsi/dev.o 00:03:03.633 CC lib/scsi/port.o 00:03:03.633 CC lib/scsi/lun.o 00:03:03.633 CC lib/nbd/nbd.o 00:03:03.633 CC lib/scsi/scsi_bdev.o 00:03:03.633 CC lib/nvmf/ctrlr.o 00:03:03.633 CC lib/scsi/scsi.o 00:03:03.633 CC lib/ftl/ftl_core.o 00:03:03.633 LIB libspdk_blobfs.a 00:03:03.633 CC lib/ftl/ftl_init.o 00:03:03.633 LIB libspdk_lvol.a 00:03:03.633 CC lib/nvmf/ctrlr_discovery.o 00:03:03.633 CC lib/nbd/nbd_rpc.o 00:03:03.633 CC 
lib/nvmf/ctrlr_bdev.o 00:03:03.891 CC lib/nvmf/subsystem.o 00:03:03.891 CC lib/nvmf/nvmf.o 00:03:03.891 CC lib/nvmf/nvmf_rpc.o 00:03:03.891 CC lib/nvmf/transport.o 00:03:03.891 CC lib/ftl/ftl_layout.o 00:03:03.891 LIB libspdk_nbd.a 00:03:04.148 CC lib/nvmf/tcp.o 00:03:04.148 CC lib/scsi/scsi_pr.o 00:03:04.148 CC lib/nvmf/rdma.o 00:03:04.406 CC lib/ftl/ftl_debug.o 00:03:04.406 CC lib/scsi/scsi_rpc.o 00:03:04.663 CC lib/ftl/ftl_io.o 00:03:04.663 CC lib/ftl/ftl_sb.o 00:03:04.663 CC lib/scsi/task.o 00:03:04.663 CC lib/ftl/ftl_l2p.o 00:03:04.663 CC lib/ftl/ftl_l2p_flat.o 00:03:04.663 CC lib/ftl/ftl_nv_cache.o 00:03:04.920 CC lib/ftl/ftl_band.o 00:03:04.920 LIB libspdk_scsi.a 00:03:04.920 CC lib/ftl/ftl_band_ops.o 00:03:04.920 CC lib/ftl/ftl_writer.o 00:03:04.920 CC lib/ftl/ftl_rq.o 00:03:05.262 CC lib/iscsi/conn.o 00:03:05.262 CC lib/vhost/vhost.o 00:03:05.262 CC lib/vhost/vhost_rpc.o 00:03:05.262 CC lib/vhost/vhost_scsi.o 00:03:05.262 CC lib/vhost/vhost_blk.o 00:03:05.262 CC lib/vhost/rte_vhost_user.o 00:03:05.519 CC lib/ftl/ftl_reloc.o 00:03:05.776 CC lib/ftl/ftl_l2p_cache.o 00:03:05.776 CC lib/ftl/ftl_p2l.o 00:03:05.776 CC lib/iscsi/init_grp.o 00:03:05.776 CC lib/ftl/mngt/ftl_mngt.o 00:03:06.040 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:06.040 CC lib/iscsi/iscsi.o 00:03:06.040 CC lib/iscsi/md5.o 00:03:06.040 CC lib/iscsi/param.o 00:03:06.040 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:06.040 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:06.040 CC lib/iscsi/portal_grp.o 00:03:06.310 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:06.310 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:06.310 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:06.310 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:06.310 CC lib/iscsi/tgt_node.o 00:03:06.310 LIB libspdk_vhost.a 00:03:06.310 CC lib/iscsi/iscsi_subsystem.o 00:03:06.310 CC lib/iscsi/iscsi_rpc.o 00:03:06.566 CC lib/iscsi/task.o 00:03:06.566 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:06.566 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:06.566 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:06.566 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:06.566 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:06.824 CC lib/ftl/utils/ftl_conf.o 00:03:06.824 CC lib/ftl/utils/ftl_md.o 00:03:06.824 CC lib/ftl/utils/ftl_mempool.o 00:03:06.824 CC lib/ftl/utils/ftl_bitmap.o 00:03:06.824 CC lib/ftl/utils/ftl_property.o 00:03:06.824 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:06.824 LIB libspdk_nvmf.a 00:03:06.824 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:07.081 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:07.081 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:07.081 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:07.081 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:07.081 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:07.081 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:07.081 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:07.081 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:07.081 CC lib/ftl/base/ftl_base_bdev.o 00:03:07.081 CC lib/ftl/base/ftl_base_dev.o 00:03:07.081 CC lib/ftl/ftl_trace.o 00:03:07.339 LIB libspdk_ftl.a 00:03:07.596 LIB libspdk_iscsi.a 00:03:07.853 CC module/env_dpdk/env_dpdk_rpc.o 00:03:07.853 CC module/blob/bdev/blob_bdev.o 00:03:07.853 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:07.853 CC module/scheduler/gscheduler/gscheduler.o 00:03:07.853 CC module/sock/posix/posix.o 00:03:07.853 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:07.853 CC module/accel/error/accel_error.o 00:03:07.853 CC module/accel/dsa/accel_dsa.o 00:03:07.853 CC module/accel/ioat/accel_ioat.o 00:03:07.853 CC module/accel/iaa/accel_iaa.o 00:03:08.111 LIB libspdk_env_dpdk_rpc.a 00:03:08.111 
LIB libspdk_scheduler_dpdk_governor.a 00:03:08.112 CC module/accel/ioat/accel_ioat_rpc.o 00:03:08.112 LIB libspdk_scheduler_gscheduler.a 00:03:08.112 CC module/accel/error/accel_error_rpc.o 00:03:08.112 CC module/accel/iaa/accel_iaa_rpc.o 00:03:08.112 CC module/accel/dsa/accel_dsa_rpc.o 00:03:08.112 LIB libspdk_scheduler_dynamic.a 00:03:08.112 LIB libspdk_blob_bdev.a 00:03:08.112 LIB libspdk_accel_ioat.a 00:03:08.112 LIB libspdk_accel_iaa.a 00:03:08.112 LIB libspdk_accel_dsa.a 00:03:08.368 LIB libspdk_accel_error.a 00:03:08.368 CC module/bdev/delay/vbdev_delay.o 00:03:08.368 CC module/blobfs/bdev/blobfs_bdev.o 00:03:08.368 CC module/bdev/gpt/gpt.o 00:03:08.368 CC module/bdev/nvme/bdev_nvme.o 00:03:08.368 CC module/bdev/error/vbdev_error.o 00:03:08.368 CC module/bdev/lvol/vbdev_lvol.o 00:03:08.368 CC module/bdev/null/bdev_null.o 00:03:08.368 CC module/bdev/malloc/bdev_malloc.o 00:03:08.368 CC module/bdev/passthru/vbdev_passthru.o 00:03:08.625 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:08.625 CC module/bdev/gpt/vbdev_gpt.o 00:03:08.625 CC module/bdev/null/bdev_null_rpc.o 00:03:08.625 CC module/bdev/error/vbdev_error_rpc.o 00:03:08.625 LIB libspdk_sock_posix.a 00:03:08.625 LIB libspdk_blobfs_bdev.a 00:03:08.625 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:08.625 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:08.883 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:08.883 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:08.883 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:08.883 LIB libspdk_bdev_null.a 00:03:08.883 LIB libspdk_bdev_error.a 00:03:08.883 LIB libspdk_bdev_gpt.a 00:03:08.883 CC module/bdev/nvme/nvme_rpc.o 00:03:08.883 CC module/bdev/nvme/bdev_mdns_client.o 00:03:08.883 LIB libspdk_bdev_delay.a 00:03:08.883 LIB libspdk_bdev_malloc.a 00:03:08.883 CC module/bdev/nvme/vbdev_opal.o 00:03:08.883 CC module/bdev/raid/bdev_raid.o 00:03:08.883 LIB libspdk_bdev_passthru.a 00:03:08.883 CC module/bdev/split/vbdev_split.o 00:03:09.140 CC module/bdev/split/vbdev_split_rpc.o 00:03:09.140 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:09.140 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:09.140 LIB libspdk_bdev_lvol.a 00:03:09.140 CC module/bdev/raid/bdev_raid_rpc.o 00:03:09.140 CC module/bdev/raid/bdev_raid_sb.o 00:03:09.140 CC module/bdev/raid/raid0.o 00:03:09.140 LIB libspdk_bdev_split.a 00:03:09.140 CC module/bdev/raid/raid1.o 00:03:09.398 CC module/bdev/raid/concat.o 00:03:09.398 CC module/bdev/raid/raid5f.o 00:03:09.398 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:09.656 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:09.656 CC module/bdev/aio/bdev_aio.o 00:03:09.656 CC module/bdev/aio/bdev_aio_rpc.o 00:03:09.656 LIB libspdk_bdev_zone_block.a 00:03:09.656 CC module/bdev/ftl/bdev_ftl.o 00:03:09.656 CC module/bdev/iscsi/bdev_iscsi.o 00:03:09.656 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:09.656 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:09.656 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:09.656 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:09.913 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:09.913 LIB libspdk_bdev_aio.a 00:03:09.913 LIB libspdk_bdev_ftl.a 00:03:09.913 LIB libspdk_bdev_raid.a 00:03:10.171 LIB libspdk_bdev_iscsi.a 00:03:10.171 LIB libspdk_bdev_virtio.a 00:03:11.104 LIB libspdk_bdev_nvme.a 00:03:11.363 CC module/event/subsystems/vmd/vmd.o 00:03:11.363 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:11.363 CC module/event/subsystems/scheduler/scheduler.o 00:03:11.363 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:11.363 CC module/event/subsystems/sock/sock.o 00:03:11.363 
CC module/event/subsystems/iobuf/iobuf.o 00:03:11.363 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:11.363 LIB libspdk_event_vhost_blk.a 00:03:11.363 LIB libspdk_event_sock.a 00:03:11.363 LIB libspdk_event_vmd.a 00:03:11.363 LIB libspdk_event_scheduler.a 00:03:11.621 LIB libspdk_event_iobuf.a 00:03:11.621 CC module/event/subsystems/accel/accel.o 00:03:11.880 LIB libspdk_event_accel.a 00:03:11.880 CC module/event/subsystems/bdev/bdev.o 00:03:12.185 LIB libspdk_event_bdev.a 00:03:12.185 CC module/event/subsystems/scsi/scsi.o 00:03:12.185 CC module/event/subsystems/nbd/nbd.o 00:03:12.185 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:12.185 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:12.443 LIB libspdk_event_nbd.a 00:03:12.443 LIB libspdk_event_scsi.a 00:03:12.443 LIB libspdk_event_nvmf.a 00:03:12.443 CC module/event/subsystems/iscsi/iscsi.o 00:03:12.443 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:12.701 LIB libspdk_event_vhost_scsi.a 00:03:12.701 LIB libspdk_event_iscsi.a 00:03:12.959 CXX app/trace/trace.o 00:03:12.959 CC app/trace_record/trace_record.o 00:03:12.959 CC app/nvmf_tgt/nvmf_main.o 00:03:12.959 CC app/iscsi_tgt/iscsi_tgt.o 00:03:12.959 CC examples/accel/perf/accel_perf.o 00:03:12.959 CC test/bdev/bdevio/bdevio.o 00:03:12.959 CC app/spdk_tgt/spdk_tgt.o 00:03:12.959 CC test/blobfs/mkfs/mkfs.o 00:03:12.959 CC test/accel/dif/dif.o 00:03:12.959 CC test/app/bdev_svc/bdev_svc.o 00:03:13.217 LINK nvmf_tgt 00:03:13.217 LINK spdk_trace_record 00:03:13.217 LINK bdev_svc 00:03:13.217 LINK iscsi_tgt 00:03:13.217 LINK spdk_tgt 00:03:13.217 LINK mkfs 00:03:13.474 LINK spdk_trace 00:03:13.474 LINK bdevio 00:03:13.474 LINK dif 00:03:13.474 LINK accel_perf 00:03:13.732 CC app/spdk_lspci/spdk_lspci.o 00:03:13.732 CC examples/bdev/hello_world/hello_bdev.o 00:03:13.989 LINK spdk_lspci 00:03:13.989 LINK hello_bdev 00:03:14.554 CC examples/blob/hello_world/hello_blob.o 00:03:14.812 LINK hello_blob 00:03:14.812 CC examples/blob/cli/blobcli.o 00:03:15.378 LINK blobcli 00:03:15.945 CC examples/ioat/perf/perf.o 00:03:15.945 CC examples/ioat/verify/verify.o 00:03:15.945 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:15.945 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:16.204 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:16.204 LINK ioat_perf 00:03:16.204 CC app/spdk_nvme_perf/perf.o 00:03:16.204 LINK verify 00:03:16.204 CC examples/nvme/hello_world/hello_world.o 00:03:16.204 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:16.462 LINK nvme_fuzz 00:03:16.462 LINK hello_world 00:03:16.720 LINK vhost_fuzz 00:03:16.720 CC examples/nvme/reconnect/reconnect.o 00:03:16.978 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:16.978 CC examples/bdev/bdevperf/bdevperf.o 00:03:17.236 LINK reconnect 00:03:17.236 LINK spdk_nvme_perf 00:03:17.503 LINK nvme_manage 00:03:17.503 CC examples/sock/hello_world/hello_sock.o 00:03:17.503 TEST_HEADER include/spdk/accel_module.h 00:03:17.503 TEST_HEADER include/spdk/bit_pool.h 00:03:17.503 TEST_HEADER include/spdk/ioat.h 00:03:17.503 TEST_HEADER include/spdk/blobfs.h 00:03:17.503 TEST_HEADER include/spdk/notify.h 00:03:17.503 TEST_HEADER include/spdk/pipe.h 00:03:17.503 TEST_HEADER include/spdk/accel.h 00:03:17.503 TEST_HEADER include/spdk/file.h 00:03:17.503 TEST_HEADER include/spdk/version.h 00:03:17.503 TEST_HEADER include/spdk/trace_parser.h 00:03:17.503 TEST_HEADER include/spdk/opal_spec.h 00:03:17.503 TEST_HEADER include/spdk/uuid.h 00:03:17.503 TEST_HEADER include/spdk/likely.h 00:03:17.503 TEST_HEADER include/spdk/dif.h 00:03:17.503 TEST_HEADER 
include/spdk/memory.h 00:03:17.503 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:17.503 TEST_HEADER include/spdk/dma.h 00:03:17.776 TEST_HEADER include/spdk/nbd.h 00:03:17.776 TEST_HEADER include/spdk/conf.h 00:03:17.776 TEST_HEADER include/spdk/env_dpdk.h 00:03:17.776 TEST_HEADER include/spdk/nvmf_spec.h 00:03:17.776 TEST_HEADER include/spdk/iscsi_spec.h 00:03:17.776 TEST_HEADER include/spdk/mmio.h 00:03:17.776 TEST_HEADER include/spdk/json.h 00:03:17.776 TEST_HEADER include/spdk/opal.h 00:03:17.776 TEST_HEADER include/spdk/bdev.h 00:03:17.776 TEST_HEADER include/spdk/base64.h 00:03:17.777 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:17.777 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:17.777 TEST_HEADER include/spdk/fd.h 00:03:17.777 TEST_HEADER include/spdk/barrier.h 00:03:17.777 TEST_HEADER include/spdk/scsi_spec.h 00:03:17.777 CC test/dma/test_dma/test_dma.o 00:03:17.777 TEST_HEADER include/spdk/zipf.h 00:03:17.777 TEST_HEADER include/spdk/nvmf.h 00:03:17.777 TEST_HEADER include/spdk/queue.h 00:03:17.777 TEST_HEADER include/spdk/xor.h 00:03:17.777 TEST_HEADER include/spdk/cpuset.h 00:03:17.777 TEST_HEADER include/spdk/thread.h 00:03:17.777 TEST_HEADER include/spdk/bdev_zone.h 00:03:17.777 TEST_HEADER include/spdk/fd_group.h 00:03:17.777 TEST_HEADER include/spdk/tree.h 00:03:17.777 TEST_HEADER include/spdk/blob_bdev.h 00:03:17.777 TEST_HEADER include/spdk/crc64.h 00:03:17.777 TEST_HEADER include/spdk/assert.h 00:03:17.777 TEST_HEADER include/spdk/nvme_spec.h 00:03:17.777 LINK hello_sock 00:03:17.777 TEST_HEADER include/spdk/endian.h 00:03:17.777 TEST_HEADER include/spdk/pci_ids.h 00:03:17.777 TEST_HEADER include/spdk/log.h 00:03:17.777 CC test/env/mem_callbacks/mem_callbacks.o 00:03:17.777 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:17.777 TEST_HEADER include/spdk/ftl.h 00:03:17.777 TEST_HEADER include/spdk/config.h 00:03:17.777 TEST_HEADER include/spdk/vhost.h 00:03:17.777 TEST_HEADER include/spdk/bdev_module.h 00:03:17.777 TEST_HEADER include/spdk/nvme_intel.h 00:03:17.777 TEST_HEADER include/spdk/idxd_spec.h 00:03:17.777 TEST_HEADER include/spdk/crc16.h 00:03:17.777 TEST_HEADER include/spdk/nvme.h 00:03:17.777 TEST_HEADER include/spdk/stdinc.h 00:03:17.777 TEST_HEADER include/spdk/scsi.h 00:03:17.777 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:17.777 TEST_HEADER include/spdk/idxd.h 00:03:17.777 TEST_HEADER include/spdk/hexlify.h 00:03:17.777 TEST_HEADER include/spdk/reduce.h 00:03:17.777 TEST_HEADER include/spdk/crc32.h 00:03:17.777 TEST_HEADER include/spdk/init.h 00:03:17.777 TEST_HEADER include/spdk/nvmf_transport.h 00:03:17.777 TEST_HEADER include/spdk/nvme_zns.h 00:03:17.777 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:17.777 TEST_HEADER include/spdk/util.h 00:03:17.777 TEST_HEADER include/spdk/jsonrpc.h 00:03:17.777 TEST_HEADER include/spdk/env.h 00:03:17.777 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:17.777 TEST_HEADER include/spdk/lvol.h 00:03:17.777 TEST_HEADER include/spdk/histogram_data.h 00:03:17.777 TEST_HEADER include/spdk/event.h 00:03:17.777 TEST_HEADER include/spdk/trace.h 00:03:17.777 TEST_HEADER include/spdk/ioat_spec.h 00:03:17.777 TEST_HEADER include/spdk/string.h 00:03:17.777 TEST_HEADER include/spdk/ublk.h 00:03:17.777 TEST_HEADER include/spdk/bit_array.h 00:03:17.777 TEST_HEADER include/spdk/scheduler.h 00:03:17.777 TEST_HEADER include/spdk/blob.h 00:03:17.777 TEST_HEADER include/spdk/gpt_spec.h 00:03:17.777 TEST_HEADER include/spdk/sock.h 00:03:17.777 TEST_HEADER include/spdk/vmd.h 00:03:17.777 TEST_HEADER include/spdk/rpc.h 00:03:17.777 CXX 
test/cpp_headers/accel_module.o 00:03:17.777 LINK bdevperf 00:03:18.049 CXX test/cpp_headers/bit_pool.o 00:03:18.049 LINK iscsi_fuzz 00:03:18.049 LINK test_dma 00:03:18.307 LINK mem_callbacks 00:03:18.307 CC app/spdk_nvme_identify/identify.o 00:03:18.307 CC app/spdk_nvme_discover/discovery_aer.o 00:03:18.307 CXX test/cpp_headers/ioat.o 00:03:18.307 CC test/event/event_perf/event_perf.o 00:03:18.307 CC test/event/reactor/reactor.o 00:03:18.564 LINK event_perf 00:03:18.564 LINK spdk_nvme_discover 00:03:18.564 CC examples/nvme/arbitration/arbitration.o 00:03:18.564 CXX test/cpp_headers/blobfs.o 00:03:18.564 LINK reactor 00:03:18.564 CC test/env/vtophys/vtophys.o 00:03:18.564 CXX test/cpp_headers/notify.o 00:03:18.822 LINK vtophys 00:03:18.822 CXX test/cpp_headers/pipe.o 00:03:18.822 LINK arbitration 00:03:19.080 CXX test/cpp_headers/accel.o 00:03:19.337 CC test/event/reactor_perf/reactor_perf.o 00:03:19.337 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:19.337 CC test/app/histogram_perf/histogram_perf.o 00:03:19.594 CXX test/cpp_headers/file.o 00:03:19.594 LINK spdk_nvme_identify 00:03:19.594 CC test/nvme/aer/aer.o 00:03:19.594 CC test/lvol/esnap/esnap.o 00:03:19.594 LINK reactor_perf 00:03:19.594 LINK env_dpdk_post_init 00:03:19.594 CXX test/cpp_headers/version.o 00:03:19.594 LINK histogram_perf 00:03:19.852 CXX test/cpp_headers/trace_parser.o 00:03:20.111 CXX test/cpp_headers/opal_spec.o 00:03:20.111 CC examples/nvme/hotplug/hotplug.o 00:03:20.111 LINK aer 00:03:20.369 CC test/app/jsoncat/jsoncat.o 00:03:20.369 CXX test/cpp_headers/uuid.o 00:03:20.369 CC test/event/app_repeat/app_repeat.o 00:03:20.369 LINK hotplug 00:03:20.627 CC app/spdk_top/spdk_top.o 00:03:20.627 CC test/nvme/reset/reset.o 00:03:20.627 LINK jsoncat 00:03:20.627 CXX test/cpp_headers/likely.o 00:03:20.627 LINK app_repeat 00:03:20.627 CC test/env/memory/memory_ut.o 00:03:20.885 CXX test/cpp_headers/dif.o 00:03:20.885 LINK reset 00:03:20.885 CC app/vhost/vhost.o 00:03:20.885 CXX test/cpp_headers/memory.o 00:03:21.142 CXX test/cpp_headers/vfio_user_pci.o 00:03:21.142 LINK vhost 00:03:21.142 CC app/spdk_dd/spdk_dd.o 00:03:21.142 CC test/app/stub/stub.o 00:03:21.401 CXX test/cpp_headers/dma.o 00:03:21.401 LINK stub 00:03:21.401 LINK memory_ut 00:03:21.401 CXX test/cpp_headers/nbd.o 00:03:21.401 CXX test/cpp_headers/conf.o 00:03:21.401 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:21.658 LINK spdk_top 00:03:21.658 LINK spdk_dd 00:03:21.658 CC test/event/scheduler/scheduler.o 00:03:21.658 CXX test/cpp_headers/env_dpdk.o 00:03:21.658 CC test/env/pci/pci_ut.o 00:03:21.658 LINK cmb_copy 00:03:21.916 CXX test/cpp_headers/nvmf_spec.o 00:03:21.916 LINK scheduler 00:03:21.916 CC test/nvme/sgl/sgl.o 00:03:21.916 CXX test/cpp_headers/iscsi_spec.o 00:03:22.173 CXX test/cpp_headers/mmio.o 00:03:22.173 LINK pci_ut 00:03:22.173 LINK sgl 00:03:22.173 CXX test/cpp_headers/json.o 00:03:22.430 CC app/fio/nvme/fio_plugin.o 00:03:22.430 CXX test/cpp_headers/opal.o 00:03:22.430 CC test/nvme/e2edp/nvme_dp.o 00:03:22.688 CXX test/cpp_headers/bdev.o 00:03:22.688 CC test/nvme/overhead/overhead.o 00:03:22.688 LINK nvme_dp 00:03:22.688 CXX test/cpp_headers/base64.o 00:03:22.945 CC examples/nvme/abort/abort.o 00:03:22.945 LINK overhead 00:03:22.945 CXX test/cpp_headers/blobfs_bdev.o 00:03:22.945 LINK spdk_nvme 00:03:23.202 CXX test/cpp_headers/nvme_ocssd.o 00:03:23.202 LINK abort 00:03:23.202 CC app/fio/bdev/fio_plugin.o 00:03:23.460 CXX test/cpp_headers/fd.o 00:03:23.460 CXX test/cpp_headers/barrier.o 00:03:23.720 CXX test/cpp_headers/scsi_spec.o 
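
The CXX test/cpp_headers/*.o entries above are SPDK's header self-containment checks: as far as can be told from the log, each object is a stub translation unit that includes exactly one public header (compiled with a C++ compiler, hence the CXX lines), so a header that forgets one of its own dependencies fails to build. A minimal sketch of the idea, with a hypothetical stub file rather than SPDK's actual generated source:

/*
 * Hypothetical header self-containment stub. The real harness
 * appears to generate one such file per public header; the header
 * named here is just an example from the log above.
 */
#include "spdk/accel_module.h"   /* must pull in everything it needs itself */

int main(void)
{
	return 0;   /* compiling at all is the test */
}
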
00:03:23.720 CXX test/cpp_headers/zipf.o 00:03:23.979 LINK spdk_bdev 00:03:23.979 CXX test/cpp_headers/nvmf.o 00:03:23.979 CC test/rpc_client/rpc_client_test.o 00:03:23.979 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:23.979 CC test/thread/poller_perf/poller_perf.o 00:03:23.979 CC test/nvme/err_injection/err_injection.o 00:03:23.979 CXX test/cpp_headers/queue.o 00:03:24.237 LINK rpc_client_test 00:03:24.237 CXX test/cpp_headers/xor.o 00:03:24.237 CXX test/cpp_headers/cpuset.o 00:03:24.237 LINK pmr_persistence 00:03:24.237 LINK poller_perf 00:03:24.237 LINK err_injection 00:03:24.237 CXX test/cpp_headers/thread.o 00:03:24.495 CC test/thread/lock/spdk_lock.o 00:03:24.495 CC examples/vmd/lsvmd/lsvmd.o 00:03:24.495 CXX test/cpp_headers/bdev_zone.o 00:03:24.495 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:03:24.495 LINK lsvmd 00:03:24.754 CXX test/cpp_headers/fd_group.o 00:03:24.754 LINK histogram_ut 00:03:24.754 CC examples/nvmf/nvmf/nvmf.o 00:03:24.754 CXX test/cpp_headers/tree.o 00:03:25.012 CXX test/cpp_headers/blob_bdev.o 00:03:25.012 CC examples/util/zipf/zipf.o 00:03:25.012 CC test/unit/lib/accel/accel.c/accel_ut.o 00:03:25.012 LINK nvmf 00:03:25.012 CXX test/cpp_headers/crc64.o 00:03:25.012 LINK zipf 00:03:25.347 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:03:25.347 CXX test/cpp_headers/assert.o 00:03:25.347 CC test/nvme/startup/startup.o 00:03:25.605 LINK esnap 00:03:25.605 CC examples/vmd/led/led.o 00:03:25.605 CXX test/cpp_headers/nvme_spec.o 00:03:25.605 LINK startup 00:03:25.863 CXX test/cpp_headers/endian.o 00:03:25.863 LINK led 00:03:25.863 CC examples/thread/thread/thread_ex.o 00:03:25.863 CXX test/cpp_headers/pci_ids.o 00:03:26.122 CXX test/cpp_headers/log.o 00:03:26.380 LINK spdk_lock 00:03:26.380 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:26.638 CXX test/cpp_headers/ftl.o 00:03:26.638 LINK thread 00:03:26.638 CC test/nvme/reserve/reserve.o 00:03:26.638 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:03:26.896 CC test/nvme/simple_copy/simple_copy.o 00:03:26.896 CXX test/cpp_headers/config.o 00:03:26.896 CXX test/cpp_headers/vhost.o 00:03:26.896 CC test/nvme/connect_stress/connect_stress.o 00:03:26.896 LINK reserve 00:03:27.153 CXX test/cpp_headers/bdev_module.o 00:03:27.154 LINK simple_copy 00:03:27.154 LINK connect_stress 00:03:27.154 CC test/nvme/boot_partition/boot_partition.o 00:03:27.411 LINK blob_bdev_ut 00:03:27.411 CXX test/cpp_headers/nvme_intel.o 00:03:27.669 LINK boot_partition 00:03:27.669 CXX test/cpp_headers/idxd_spec.o 00:03:27.669 CC test/unit/lib/blob/blob.c/blob_ut.o 00:03:27.928 CXX test/cpp_headers/crc16.o 00:03:27.928 LINK accel_ut 00:03:27.928 CXX test/cpp_headers/nvme.o 00:03:27.928 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:03:27.928 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:03:28.186 CXX test/cpp_headers/stdinc.o 00:03:28.186 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:03:28.186 CC examples/idxd/perf/perf.o 00:03:28.186 LINK tree_ut 00:03:28.186 CXX test/cpp_headers/scsi.o 00:03:28.186 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:28.445 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:28.445 LINK interrupt_tgt 00:03:28.445 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:03:28.703 CXX test/cpp_headers/idxd.o 00:03:28.703 LINK idxd_perf 00:03:28.703 CC test/nvme/compliance/nvme_compliance.o 00:03:28.703 LINK blobfs_bdev_ut 00:03:28.703 CXX test/cpp_headers/hexlify.o 00:03:28.961 CXX test/cpp_headers/reduce.o 00:03:28.961 LINK nvme_compliance 00:03:28.961 CXX test/cpp_headers/crc32.o 
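
The *_ut.o objects that start appearing here (histogram_ut, accel_ut, bdev_ut, blob_bdev_ut, ...) are SPDK's CUnit-based unit tests. A minimal sketch of the same registry/suite/test pattern, with a made-up suite and assertion rather than code from the tree:

#include <CUnit/Basic.h>

/* A made-up test case in the style of the *_ut.c files above. */
static void
test_example(void)
{
	CU_ASSERT_EQUAL(2 + 2, 4);
}

int
main(void)
{
	CU_pSuite suite;
	unsigned int num_failures;

	if (CU_initialize_registry() != CUE_SUCCESS) {
		return CU_get_error();
	}

	suite = CU_add_suite("example", NULL, NULL);
	if (suite == NULL || CU_add_test(suite, "test_example", test_example) == NULL) {
		CU_cleanup_registry();
		return CU_get_error();
	}

	CU_basic_set_mode(CU_BRM_VERBOSE);
	CU_basic_run_tests();
	num_failures = CU_get_number_of_failures();
	CU_cleanup_registry();
	return num_failures ? 1 : 0;
}

Each test binary exits nonzero on any failed assertion, which is what lets the harness later aggregate pass/fail per library as seen in the run above.
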
00:03:28.961 CC test/unit/lib/dma/dma.c/dma_ut.o 00:03:29.220 CC test/unit/lib/event/app.c/app_ut.o 00:03:29.220 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:03:29.220 CXX test/cpp_headers/init.o 00:03:29.220 CXX test/cpp_headers/nvmf_transport.o 00:03:29.478 LINK blobfs_async_ut 00:03:29.478 LINK dma_ut 00:03:29.478 LINK blobfs_sync_ut 00:03:29.735 CXX test/cpp_headers/nvme_zns.o 00:03:29.735 CXX test/cpp_headers/vfio_user_spec.o 00:03:29.735 CXX test/cpp_headers/util.o 00:03:29.735 LINK app_ut 00:03:29.735 CXX test/cpp_headers/jsonrpc.o 00:03:29.735 CC test/nvme/fused_ordering/fused_ordering.o 00:03:29.993 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:29.993 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:03:29.993 CC test/unit/lib/bdev/part.c/part_ut.o 00:03:29.993 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:03:29.993 CXX test/cpp_headers/env.o 00:03:29.993 LINK reactor_ut 00:03:29.993 LINK fused_ordering 00:03:29.993 LINK doorbell_aers 00:03:30.251 CXX test/cpp_headers/nvmf_cmd.o 00:03:30.251 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:03:30.251 LINK scsi_nvme_ut 00:03:30.251 CXX test/cpp_headers/lvol.o 00:03:30.251 LINK ioat_ut 00:03:30.510 CXX test/cpp_headers/histogram_data.o 00:03:30.510 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:03:30.510 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:03:30.510 CXX test/cpp_headers/event.o 00:03:30.510 LINK gpt_ut 00:03:30.768 CC test/nvme/fdp/fdp.o 00:03:30.768 CXX test/cpp_headers/trace.o 00:03:30.768 CXX test/cpp_headers/ioat_spec.o 00:03:31.026 CXX test/cpp_headers/string.o 00:03:31.026 CXX test/cpp_headers/ublk.o 00:03:31.026 CC test/nvme/cuse/cuse.o 00:03:31.026 LINK fdp 00:03:31.026 CXX test/cpp_headers/bit_array.o 00:03:31.026 LINK bdev_ut 00:03:31.284 CXX test/cpp_headers/scheduler.o 00:03:31.284 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:03:31.284 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:03:31.284 CXX test/cpp_headers/blob.o 00:03:31.544 CXX test/cpp_headers/gpt_spec.o 00:03:31.544 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:03:31.544 LINK vbdev_lvol_ut 00:03:31.544 CXX test/cpp_headers/sock.o 00:03:31.802 CXX test/cpp_headers/vmd.o 00:03:31.802 LINK cuse 00:03:31.802 CXX test/cpp_headers/rpc.o 00:03:31.802 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:03:32.060 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:03:32.060 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:03:32.060 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:03:32.060 LINK json_util_ut 00:03:32.318 LINK conn_ut 00:03:32.318 CC test/unit/lib/iscsi/param.c/param_ut.o 00:03:32.318 LINK bdev_zone_ut 00:03:32.577 LINK init_grp_ut 00:03:32.577 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:03:32.577 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:03:32.835 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:03:32.835 LINK param_ut 00:03:33.092 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:03:33.092 LINK portal_grp_ut 00:03:33.351 LINK part_ut 00:03:33.351 LINK vbdev_zone_block_ut 00:03:33.609 LINK tgt_node_ut 00:03:33.609 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:03:33.866 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:03:33.866 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:03:33.866 CC test/unit/lib/log/log.c/log_ut.o 00:03:33.866 LINK json_write_ut 00:03:33.866 LINK bdev_raid_ut 00:03:34.124 LINK json_parse_ut 00:03:34.124 LINK log_ut 00:03:34.124 LINK jsonrpc_server_ut 00:03:34.124 CC test/unit/lib/notify/notify.c/notify_ut.o 00:03:34.382 CC 
test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:03:34.383 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:03:34.383 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:03:34.383 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:03:34.383 LINK bdev_ut 00:03:34.641 LINK notify_ut 00:03:34.641 LINK iscsi_ut 00:03:34.641 LINK bdev_raid_sb_ut 00:03:34.641 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:03:34.917 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:03:34.917 LINK concat_ut 00:03:34.917 LINK raid1_ut 00:03:34.917 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:03:35.174 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:03:35.174 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:03:35.174 CC test/unit/lib/bdev/raid/raid5f.c/raid5f_ut.o 00:03:35.432 LINK blob_ut 00:03:35.690 LINK nvme_ut 00:03:35.690 LINK lvol_ut 00:03:35.690 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:03:35.948 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:03:35.948 CC test/unit/lib/sock/sock.c/sock_ut.o 00:03:36.206 LINK ctrlr_bdev_ut 00:03:36.206 LINK dev_ut 00:03:36.206 LINK raid5f_ut 00:03:36.464 CC test/unit/lib/sock/posix.c/posix_ut.o 00:03:36.464 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:03:36.464 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:03:36.722 LINK scsi_ut 00:03:36.980 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:03:36.980 LINK ctrlr_discovery_ut 00:03:37.238 LINK lun_ut 00:03:37.238 LINK subsystem_ut 00:03:37.238 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:03:37.238 LINK posix_ut 00:03:37.496 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:03:37.496 LINK sock_ut 00:03:37.496 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:03:37.496 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:03:37.755 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:03:37.755 LINK ctrlr_ut 00:03:38.014 LINK bdev_nvme_ut 00:03:38.273 LINK nvme_ctrlr_cmd_ut 00:03:38.273 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:03:38.531 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:03:38.531 LINK scsi_bdev_ut 00:03:38.531 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:03:38.531 LINK nvmf_ut 00:03:38.531 LINK nvme_ctrlr_ocssd_cmd_ut 00:03:38.789 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:03:38.789 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:03:38.789 LINK tcp_ut 00:03:39.047 CC test/unit/lib/thread/thread.c/thread_ut.o 00:03:39.047 LINK nvme_ns_ut 00:03:39.047 LINK nvme_ctrlr_ut 00:03:39.304 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:03:39.304 LINK scsi_pr_ut 00:03:39.304 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:03:39.304 CC test/unit/lib/util/base64.c/base64_ut.o 00:03:39.304 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:03:39.561 LINK base64_ut 00:03:39.561 LINK pci_event_ut 00:03:39.819 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:03:39.819 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:03:39.819 LINK nvme_ns_ocssd_cmd_ut 00:03:40.077 LINK nvme_poll_group_ut 00:03:40.077 LINK iobuf_ut 00:03:40.077 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:03:40.077 LINK bit_array_ut 00:03:40.077 LINK nvme_ns_cmd_ut 00:03:40.335 LINK subsystem_ut 00:03:40.335 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:03:40.335 LINK nvme_pcie_ut 00:03:40.335 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:03:40.594 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:03:40.594 CC test/unit/lib/vhost/vhost.c/vhost_ut.o 00:03:40.594 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:03:40.594 LINK cpuset_ut 00:03:40.594 CC 
test/unit/lib/util/crc16.c/crc16_ut.o 00:03:40.852 LINK transport_ut 00:03:40.852 LINK crc16_ut 00:03:40.852 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:03:40.852 LINK idxd_user_ut 00:03:40.852 LINK rpc_ut 00:03:41.110 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:03:41.110 LINK nvme_quirks_ut 00:03:41.110 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:03:41.110 LINK crc32_ieee_ut 00:03:41.110 LINK thread_ut 00:03:41.110 CC test/unit/lib/rdma/common.c/common_ut.o 00:03:41.368 CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o 00:03:41.368 LINK rdma_ut 00:03:41.368 LINK nvme_qpair_ut 00:03:41.368 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:03:41.368 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:03:41.368 LINK crc32c_ut 00:03:41.626 CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o 00:03:41.626 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:03:41.626 LINK ftl_l2p_ut 00:03:41.626 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:03:41.626 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:03:41.626 LINK common_ut 00:03:41.884 LINK crc64_ut 00:03:41.884 CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o 00:03:41.884 CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o 00:03:41.884 LINK idxd_ut 00:03:41.884 CC test/unit/lib/util/dif.c/dif_ut.o 00:03:42.142 LINK ftl_bitmap_ut 00:03:42.142 CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o 00:03:42.142 LINK nvme_io_msg_ut 00:03:42.400 LINK nvme_transport_ut 00:03:42.400 LINK vhost_ut 00:03:42.400 LINK ftl_io_ut 00:03:42.400 CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o 00:03:42.400 CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o 00:03:42.658 LINK ftl_mempool_ut 00:03:42.658 CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o 00:03:42.658 LINK ftl_band_ut 00:03:42.658 CC test/unit/lib/util/iov.c/iov_ut.o 00:03:42.658 CC test/unit/lib/util/math.c/math_ut.o 00:03:42.916 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:03:42.916 LINK nvme_pcie_common_ut 00:03:42.916 LINK math_ut 00:03:42.916 CC test/unit/lib/util/string.c/string_ut.o 00:03:43.175 LINK iov_ut 00:03:43.175 LINK ftl_mngt_ut 00:03:43.175 CC test/unit/lib/util/xor.c/xor_ut.o 00:03:43.175 LINK dif_ut 00:03:43.175 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:03:43.175 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:03:43.175 LINK pipe_ut 00:03:43.433 LINK nvme_tcp_ut 00:03:43.433 LINK string_ut 00:03:43.433 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:03:43.433 CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o 00:03:43.433 LINK xor_ut 00:03:43.998 LINK ftl_layout_upgrade_ut 00:03:43.998 LINK nvme_fabric_ut 00:03:43.998 LINK nvme_opal_ut 00:03:44.256 LINK ftl_sb_ut 00:03:45.191 LINK nvme_cuse_ut 00:03:45.448 LINK nvme_rdma_ut 00:03:45.706 00:03:45.706 real 1m53.883s 00:03:45.706 user 9m31.739s 00:03:45.706 sys 1m48.717s 00:03:45.706 ************************************ 00:03:45.706 END TEST unittest_build 00:03:45.706 ************************************ 00:03:45.706 10:17:39 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:03:45.706 10:17:39 -- common/autotest_common.sh@10 -- $ set +x 00:03:45.706 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:03:45.706 10:17:39 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:45.706 10:17:39 -- nvmf/common.sh@7 -- # uname -s 00:03:45.706 10:17:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:45.706 10:17:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:45.706 10:17:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:45.706 
10:17:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:45.706 10:17:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:45.706 10:17:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:45.706 10:17:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:45.706 10:17:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:45.706 10:17:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:45.706 10:17:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:45.706 10:17:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3688052d-664a-443d-8233-bff1e7acc2e2 00:03:45.706 10:17:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=3688052d-664a-443d-8233-bff1e7acc2e2 00:03:45.706 10:17:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:45.706 10:17:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:45.706 10:17:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:45.706 10:17:39 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:45.707 10:17:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:45.707 10:17:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:45.707 10:17:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:45.707 10:17:39 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:03:45.707 10:17:39 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:03:45.707 10:17:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:03:45.707 10:17:39 -- paths/export.sh@5 -- # export PATH 00:03:45.707 10:17:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:03:45.707 10:17:39 -- nvmf/common.sh@46 -- # : 0 00:03:45.707 10:17:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:03:45.707 10:17:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:03:45.707 10:17:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:03:45.707 10:17:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:45.707 10:17:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:45.707 10:17:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:03:45.707 10:17:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:03:45.707 10:17:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:03:45.707 10:17:39 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:45.707 10:17:39 -- spdk/autotest.sh@32 -- # uname -s 00:03:45.707 10:17:39 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:45.707 10:17:39 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -- 
%E' 00:03:45.707 10:17:39 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:45.707 10:17:39 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:45.707 10:17:39 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:45.707 10:17:39 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:46.275 10:17:40 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:46.275 10:17:40 -- spdk/autotest.sh@46 -- # udevadm=/usr/bin/udevadm 00:03:46.275 10:17:40 -- spdk/autotest.sh@48 -- # udevadm_pid=93858 00:03:46.275 10:17:40 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:03:46.275 10:17:40 -- spdk/autotest.sh@47 -- # /usr/bin/udevadm monitor --property 00:03:46.275 10:17:40 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:03:46.275 10:17:40 -- spdk/autotest.sh@54 -- # echo 93904 00:03:46.275 10:17:40 -- spdk/autotest.sh@56 -- # echo 93988 00:03:46.275 10:17:40 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:03:46.275 10:17:40 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:03:46.275 10:17:40 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:46.275 10:17:40 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:03:46.275 10:17:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:46.275 10:17:40 -- common/autotest_common.sh@10 -- # set +x 00:03:46.275 10:17:40 -- spdk/autotest.sh@70 -- # create_test_list 00:03:46.275 10:17:40 -- common/autotest_common.sh@736 -- # xtrace_disable 00:03:46.275 10:17:40 -- common/autotest_common.sh@10 -- # set +x 00:03:46.275 10:17:40 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:46.275 10:17:40 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:46.275 10:17:40 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:03:46.275 10:17:40 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:46.275 10:17:40 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:03:46.275 10:17:40 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:03:46.275 10:17:40 -- common/autotest_common.sh@1440 -- # uname 00:03:46.275 10:17:40 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:03:46.275 10:17:40 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:03:46.275 10:17:40 -- common/autotest_common.sh@1460 -- # uname 00:03:46.275 10:17:40 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:03:46.275 10:17:40 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:03:46.275 10:17:40 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:03:46.275 10:17:40 -- spdk/autotest.sh@83 -- # hash lcov 00:03:46.275 10:17:40 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:46.275 10:17:40 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:03:46.275 --rc lcov_branch_coverage=1 00:03:46.275 --rc lcov_function_coverage=1 00:03:46.275 --rc genhtml_branch_coverage=1 00:03:46.275 --rc genhtml_function_coverage=1 00:03:46.275 --rc genhtml_legend=1 00:03:46.275 --rc geninfo_all_blocks=1 00:03:46.275 ' 00:03:46.275 10:17:40 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:03:46.275 --rc lcov_branch_coverage=1 00:03:46.275 --rc lcov_function_coverage=1 00:03:46.275 --rc genhtml_branch_coverage=1 
00:03:46.275 --rc genhtml_function_coverage=1 00:03:46.275 --rc genhtml_legend=1 00:03:46.275 --rc geninfo_all_blocks=1 00:03:46.275 ' 00:03:46.275 10:17:40 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:03:46.275 --rc lcov_branch_coverage=1 00:03:46.275 --rc lcov_function_coverage=1 00:03:46.275 --rc genhtml_branch_coverage=1 00:03:46.275 --rc genhtml_function_coverage=1 00:03:46.275 --rc genhtml_legend=1 00:03:46.275 --rc geninfo_all_blocks=1 00:03:46.275 --no-external' 00:03:46.275 10:17:40 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:03:46.275 --rc lcov_branch_coverage=1 00:03:46.275 --rc lcov_function_coverage=1 00:03:46.275 --rc genhtml_branch_coverage=1 00:03:46.275 --rc genhtml_function_coverage=1 00:03:46.275 --rc genhtml_legend=1 00:03:46.275 --rc geninfo_all_blocks=1 00:03:46.275 --no-external' 00:03:46.275 10:17:40 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:46.534 lcov: LCOV version 1.15 00:03:46.534 10:17:40 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:48.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:48.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:03:48.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:48.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:03:48.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:48.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:03:48.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:48.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:48.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:48.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:03:48.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:48.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:48.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:48.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:03:48.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:48.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:03:48.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:48.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:03:48.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:48.440 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:03:48.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:48.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:03:48.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:48.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:03:48.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:48.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:03:48.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:48.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:48.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:48.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:03:48.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:48.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:03:48.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:48.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:03:48.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:03:48.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:03:48.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:48.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:03:48.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:48.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:03:48.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:48.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:48.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:48.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:03:48.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:48.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:48.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:03:48.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:03:48.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:48.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:03:48.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:48.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:03:48.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no 
functions found
00:03:48.440 geninfo: WARNING: GCOV did not produce any data for the header-compilation objects under /home/vagrant/spdk_repo/spdk/test/cpp_headers -- the same two-line warning ('<name>.gcno:no functions found' followed by 'GCOV did not produce any data for <name>.gcno') repeats for memory, nbd, crc32, blob_bdev, vhost, histogram_data, bdev_zone, scheduler, bdev, scsi_spec, nvme_zns, stdinc, nvme_ocssd_spec, ftl, config, gpt_spec, rpc, trace, pipe, opal_spec, env, file, ioat_spec, endian, vmd, blobfs, nvme, blob, accel, nvmf_cmd, opal, nvme_intel, string, scsi, mmio, idxd, nvmf_transport, vfio_user_spec, queue, dif, lvol, crc64, base64, version, zipf, bdev_module, env_dpdk, init, jsonrpc, fd_group, event, iscsi_spec, util, idxd_spec, reduce, notify, accel_module, conf, xor and tree
00:04:35.364 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno, ftl_p2l_upgrade.gcno and ftl_band_upgrade.gcno (no functions found in any of them)
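These warnings are expected rather than a failure: every object under test/cpp_headers comes from compiling a stub that only includes one public SPDK header, so its .gcno graph contains no functions for geninfo to record. A minimal sketch of keeping such objects out of the capture instead of warning on each one (the tracefile names cov_base.info and cov.info are illustrative, not taken from this run):

    # Capture coverage, then strip the header-compilation objects that can
    # never carry function data; 'lcov --remove' filters by path pattern.
    lcov --capture --directory /home/vagrant/spdk_repo/spdk --output-file cov_base.info
    lcov --remove cov_base.info '*/test/cpp_headers/*' --output-file cov.info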
00:04:35.364 10:18:27 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup
00:04:35.364 10:18:27 -- common/autotest_common.sh@712 -- # xtrace_disable
00:04:35.364 10:18:27 -- common/autotest_common.sh@10 -- # set +x
00:04:35.364 10:18:27 -- spdk/autotest.sh@102 -- # rm -f
00:04:35.364 10:18:27 -- spdk/autotest.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:35.364 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:04:35.364 0000:00:06.0 (1b36 0010): Already using the nvme driver
00:04:35.364 10:18:28 -- spdk/autotest.sh@107 -- # get_zoned_devs
00:04:35.364 10:18:28 -- common/autotest_common.sh@1654 -- # zoned_devs=()
00:04:35.364 10:18:28 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs
00:04:35.364 10:18:28 -- common/autotest_common.sh@1655 -- # local nvme bdf
00:04:35.364 10:18:28 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme*
00:04:35.364 10:18:28 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1
00:04:35.364 10:18:28 -- common/autotest_common.sh@1647 -- # local device=nvme0n1
00:04:35.364 10:18:28 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:04:35.364 10:18:28 -- common/autotest_common.sh@1650 -- # [[ none != none ]]
00:04:35.364 10:18:28 -- spdk/autotest.sh@109 -- # (( 0 > 0 ))
00:04:35.364 10:18:28 -- spdk/autotest.sh@121 -- # grep -v p
00:04:35.364 10:18:28 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1
00:04:35.364 10:18:28 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true)
00:04:35.364 10:18:28 -- spdk/autotest.sh@123 -- # [[ -z '' ]]
00:04:35.364 10:18:28 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1
00:04:35.364 10:18:28 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt
00:04:35.364 10:18:28 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:04:35.364 No valid GPT data, bailing
00:04:35.364 10:18:28 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:04:35.364 10:18:28 -- scripts/common.sh@393 -- # pt=
00:04:35.364 10:18:28 -- scripts/common.sh@394 -- # return 1
00:04:35.364 10:18:28 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:04:35.364 1+0 records in
00:04:35.364 1+0 records out
00:04:35.364 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0277444 s, 37.8 MB/s
00:04:35.364 10:18:28 -- spdk/autotest.sh@129 -- # sync
00:04:35.364 10:18:28 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes
00:04:35.364 10:18:28 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:04:35.364 10:18:28 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:04:35.622 10:18:29 -- spdk/autotest.sh@135 -- # uname -s
00:04:35.622 10:18:29 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']'
00:04:35.622 10:18:29 -- spdk/autotest.sh@136 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh
00:04:35.622 10:18:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:35.622 10:18:29 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:35.622 10:18:29 -- common/autotest_common.sh@10 -- # set +x
00:04:35.622 ************************************
00:04:35.622 START TEST setup.sh
00:04:35.622 ************************************
00:04:35.622 10:18:29 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh
00:04:35.880 * Looking for test storage...
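The two probes just traced decide whether the disk is safe for the tests to claim: /sys/block/nvme0n1/queue/zoned reading 'none' means the device is not zoned, and the empty blkid PTTYPE answer (after spdk-gpt.py found no GPT) means no partition table, so the first MiB is zeroed to hand the suite a clean disk. A condensed sketch of the same two checks outside the harness (device name taken from this run):

    # Zoned check: 'none' -> regular block device, anything else -> zoned.
    cat /sys/block/nvme0n1/queue/zoned
    # Partition-table check: an empty PTTYPE means neither GPT nor MBR is
    # present; blkid exits non-zero in that case, hence the '|| true'.
    blkid -s PTTYPE -o value /dev/nvme0n1 || true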
00:04:35.880 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:35.880 10:18:29 -- setup/test-setup.sh@10 -- # uname -s 00:04:35.880 10:18:29 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:35.880 10:18:29 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:35.880 10:18:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:35.881 10:18:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:35.881 10:18:29 -- common/autotest_common.sh@10 -- # set +x 00:04:35.881 ************************************ 00:04:35.881 START TEST acl 00:04:35.881 ************************************ 00:04:35.881 10:18:29 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:35.881 * Looking for test storage... 00:04:35.881 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:35.881 10:18:29 -- setup/acl.sh@10 -- # get_zoned_devs 00:04:35.881 10:18:29 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:35.881 10:18:29 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:35.881 10:18:29 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:35.881 10:18:29 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:35.881 10:18:29 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:35.881 10:18:29 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:35.881 10:18:29 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:35.881 10:18:29 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:35.881 10:18:29 -- setup/acl.sh@12 -- # devs=() 00:04:35.881 10:18:29 -- setup/acl.sh@12 -- # declare -a devs 00:04:35.881 10:18:29 -- setup/acl.sh@13 -- # drivers=() 00:04:35.881 10:18:29 -- setup/acl.sh@13 -- # declare -A drivers 00:04:35.881 10:18:29 -- setup/acl.sh@51 -- # setup reset 00:04:35.881 10:18:29 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:35.881 10:18:29 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:36.447 10:18:30 -- setup/acl.sh@52 -- # collect_setup_devs 00:04:36.447 10:18:30 -- setup/acl.sh@16 -- # local dev driver 00:04:36.447 10:18:30 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:36.447 10:18:30 -- setup/acl.sh@15 -- # setup output status 00:04:36.447 10:18:30 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:36.447 10:18:30 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:36.447 Hugepages 00:04:36.447 node hugesize free / total 00:04:36.447 10:18:30 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:36.447 10:18:30 -- setup/acl.sh@19 -- # continue 00:04:36.447 10:18:30 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:36.447 00:04:36.447 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:36.447 10:18:30 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:36.447 10:18:30 -- setup/acl.sh@19 -- # continue 00:04:36.447 10:18:30 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:36.447 10:18:30 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:36.447 10:18:30 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:36.447 10:18:30 -- setup/acl.sh@20 -- # continue 00:04:36.447 10:18:30 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:36.706 10:18:30 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:04:36.706 10:18:30 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:36.706 10:18:30 -- setup/acl.sh@21 -- # 
[[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:36.706 10:18:30 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:36.706 10:18:30 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:36.706 10:18:30 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:36.706 10:18:30 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:36.706 10:18:30 -- setup/acl.sh@54 -- # run_test denied denied 00:04:36.706 10:18:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:36.706 10:18:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:36.706 10:18:30 -- common/autotest_common.sh@10 -- # set +x 00:04:36.706 ************************************ 00:04:36.706 START TEST denied 00:04:36.706 ************************************ 00:04:36.706 10:18:30 -- common/autotest_common.sh@1104 -- # denied 00:04:36.706 10:18:30 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:04:36.706 10:18:30 -- setup/acl.sh@38 -- # setup output config 00:04:36.706 10:18:30 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:04:36.706 10:18:30 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:36.706 10:18:30 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:38.082 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:04:38.082 10:18:31 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:04:38.082 10:18:31 -- setup/acl.sh@28 -- # local dev driver 00:04:38.082 10:18:31 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:38.082 10:18:31 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:04:38.082 10:18:31 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:04:38.082 10:18:31 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:38.082 10:18:31 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:38.082 10:18:31 -- setup/acl.sh@41 -- # setup reset 00:04:38.082 10:18:31 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:38.082 10:18:31 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:38.649 ************************************ 00:04:38.649 END TEST denied 00:04:38.649 ************************************ 00:04:38.649 00:04:38.649 real 0m1.798s 00:04:38.649 user 0m0.523s 00:04:38.649 sys 0m1.322s 00:04:38.649 10:18:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.649 10:18:32 -- common/autotest_common.sh@10 -- # set +x 00:04:38.649 10:18:32 -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:38.649 10:18:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:38.649 10:18:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:38.649 10:18:32 -- common/autotest_common.sh@10 -- # set +x 00:04:38.649 ************************************ 00:04:38.649 START TEST allowed 00:04:38.649 ************************************ 00:04:38.649 10:18:32 -- common/autotest_common.sh@1104 -- # allowed 00:04:38.649 10:18:32 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:04:38.649 10:18:32 -- setup/acl.sh@45 -- # setup output config 00:04:38.649 10:18:32 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:04:38.649 10:18:32 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:38.649 10:18:32 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:40.032 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:40.032 10:18:33 -- setup/acl.sh@47 -- # verify 00:04:40.032 10:18:33 -- setup/acl.sh@28 -- # local dev driver 00:04:40.032 10:18:33 -- setup/acl.sh@48 -- # setup reset 00:04:40.032 10:18:33 -- 
setup/common.sh@9 -- # [[ reset == output ]] 00:04:40.032 10:18:33 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:40.598 00:04:40.598 real 0m1.933s 00:04:40.598 user 0m0.468s 00:04:40.598 sys 0m1.447s 00:04:40.598 10:18:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:40.598 ************************************ 00:04:40.598 END TEST allowed 00:04:40.598 ************************************ 00:04:40.598 10:18:34 -- common/autotest_common.sh@10 -- # set +x 00:04:40.598 00:04:40.598 real 0m4.638s 00:04:40.598 user 0m1.566s 00:04:40.598 sys 0m3.142s 00:04:40.598 10:18:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:40.598 10:18:34 -- common/autotest_common.sh@10 -- # set +x 00:04:40.598 ************************************ 00:04:40.598 END TEST acl 00:04:40.598 ************************************ 00:04:40.598 10:18:34 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:40.598 10:18:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:40.598 10:18:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:40.598 10:18:34 -- common/autotest_common.sh@10 -- # set +x 00:04:40.598 ************************************ 00:04:40.598 START TEST hugepages 00:04:40.598 ************************************ 00:04:40.598 10:18:34 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:40.598 * Looking for test storage... 00:04:40.598 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:40.598 10:18:34 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:40.598 10:18:34 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:40.598 10:18:34 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:40.598 10:18:34 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:40.598 10:18:34 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:40.598 10:18:34 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:40.598 10:18:34 -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:40.598 10:18:34 -- setup/common.sh@18 -- # local node= 00:04:40.598 10:18:34 -- setup/common.sh@19 -- # local var val 00:04:40.598 10:18:34 -- setup/common.sh@20 -- # local mem_f mem 00:04:40.598 10:18:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:40.598 10:18:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:40.598 10:18:34 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:40.598 10:18:34 -- setup/common.sh@28 -- # mapfile -t mem 00:04:40.598 10:18:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:40.598 10:18:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.598 10:18:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.598 10:18:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251092 kB' 'MemFree: 3089684 kB' 'MemAvailable: 7406128 kB' 'Buffers: 37544 kB' 'Cached: 4404680 kB' 'SwapCached: 0 kB' 'Active: 1200692 kB' 'Inactive: 3368652 kB' 'Active(anon): 136168 kB' 'Inactive(anon): 1784 kB' 'Active(file): 1064524 kB' 'Inactive(file): 3366868 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 380 kB' 'Writeback: 0 kB' 'AnonPages: 145900 kB' 'Mapped: 73264 kB' 'Shmem: 2624 kB' 'KReclaimable: 207124 kB' 'Slab: 299636 kB' 'SReclaimable: 207124 kB' 'SUnreclaim: 92512 kB' 'KernelStack: 4716 kB' 'PageTables: 3476 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 4028392 kB' 'Committed_AS: 632956 
kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14404 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB'
00:04:40.599 10:18:34 -- setup/common.sh@32 -- # [xtrace collapsed: the read loop tests every /proc/meminfo key against Hugepagesize and issues 'continue' for each non-matching key]
00:04:40.599 10:18:34 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:40.599 10:18:34 -- setup/common.sh@33 -- # echo 2048
00:04:40.599 10:18:34 -- setup/common.sh@33 -- # return 0
00:04:40.599 10:18:34 -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:04:40.599 10:18:34 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
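get_meminfo walks /proc/meminfo field by field in bash, which is why the trace above runs so long. The same lookup is a one-liner outside the harness (a sketch, not part of the suite):

    # Print the value of one /proc/meminfo key, here the huge page size in
    # kB; on this machine it prints 2048, matching the echo above.
    awk '/^Hugepagesize:/ {print $2}' /proc/meminfo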
00:04:40.599 10:18:34 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:40.599 10:18:34 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:40.599 10:18:34 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:40.599 10:18:34 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:40.599 10:18:34 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:40.599 10:18:34 -- setup/hugepages.sh@207 -- # get_nodes 00:04:40.599 10:18:34 -- setup/hugepages.sh@27 -- # local node 00:04:40.599 10:18:34 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:40.599 10:18:34 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:40.599 10:18:34 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:40.599 10:18:34 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:40.599 10:18:34 -- setup/hugepages.sh@208 -- # clear_hp 00:04:40.599 10:18:34 -- setup/hugepages.sh@37 -- # local node hp 00:04:40.599 10:18:34 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:40.599 10:18:34 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:40.599 10:18:34 -- setup/hugepages.sh@41 -- # echo 0 00:04:40.599 10:18:34 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:40.599 10:18:34 -- setup/hugepages.sh@41 -- # echo 0 00:04:40.599 10:18:34 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:40.599 10:18:34 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:40.599 10:18:34 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:40.599 10:18:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:40.599 10:18:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:40.599 10:18:34 -- common/autotest_common.sh@10 -- # set +x 00:04:40.599 ************************************ 00:04:40.599 START TEST default_setup 00:04:40.599 ************************************ 00:04:40.599 10:18:34 -- common/autotest_common.sh@1104 -- # default_setup 00:04:40.599 10:18:34 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:40.599 10:18:34 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:40.599 10:18:34 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:40.599 10:18:34 -- setup/hugepages.sh@51 -- # shift 00:04:40.599 10:18:34 -- setup/hugepages.sh@52 -- # node_ids=("$@") 00:04:40.599 10:18:34 -- setup/hugepages.sh@52 -- # local node_ids 00:04:40.599 10:18:34 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:40.599 10:18:34 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:40.599 10:18:34 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:40.599 10:18:34 -- setup/hugepages.sh@62 -- # user_nodes=("$@") 00:04:40.599 10:18:34 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:40.599 10:18:34 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:40.599 10:18:34 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:40.599 10:18:34 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:40.599 10:18:34 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:40.599 10:18:34 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:40.599 10:18:34 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:40.599 10:18:34 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:40.599 10:18:34 -- setup/hugepages.sh@73 -- # return 0 00:04:40.599 10:18:34 -- setup/hugepages.sh@137 -- # setup output 00:04:40.599 10:18:34 -- setup/common.sh@9 -- # [[ output == 
output ]] 00:04:40.599 10:18:34 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:41.167 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:04:41.167 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:42.105 10:18:35 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:42.105 10:18:35 -- setup/hugepages.sh@89 -- # local node 00:04:42.105 10:18:35 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:42.105 10:18:35 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:42.105 10:18:35 -- setup/hugepages.sh@92 -- # local surp 00:04:42.105 10:18:35 -- setup/hugepages.sh@93 -- # local resv 00:04:42.105 10:18:35 -- setup/hugepages.sh@94 -- # local anon 00:04:42.105 10:18:35 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:42.105 10:18:35 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:42.105 10:18:35 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:42.105 10:18:35 -- setup/common.sh@18 -- # local node= 00:04:42.105 10:18:35 -- setup/common.sh@19 -- # local var val 00:04:42.105 10:18:35 -- setup/common.sh@20 -- # local mem_f mem 00:04:42.105 10:18:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.105 10:18:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:42.105 10:18:35 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:42.105 10:18:35 -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.105 10:18:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.105 10:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.105 10:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.105 10:18:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251092 kB' 'MemFree: 5183540 kB' 'MemAvailable: 9500056 kB' 'Buffers: 37544 kB' 'Cached: 4404720 kB' 'SwapCached: 0 kB' 'Active: 1210416 kB' 'Inactive: 3368636 kB' 'Active(anon): 145852 kB' 'Inactive(anon): 1784 kB' 'Active(file): 1064564 kB' 'Inactive(file): 3366852 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 636 kB' 'Writeback: 0 kB' 'AnonPages: 155544 kB' 'Mapped: 73476 kB' 'Shmem: 2616 kB' 'KReclaimable: 207172 kB' 'Slab: 299720 kB' 'SReclaimable: 207172 kB' 'SUnreclaim: 92548 kB' 'KernelStack: 4608 kB' 'PageTables: 3620 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076968 kB' 'Committed_AS: 642576 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14404 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB' 00:04:42.105 10:18:35 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.105 10:18:35 -- setup/common.sh@32 -- # continue 00:04:42.105 10:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.105 10:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.105 10:18:35 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.105 10:18:35 -- setup/common.sh@32 -- # continue 00:04:42.105 10:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.105 10:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.105 10:18:35 -- setup/common.sh@32 -- # [[ MemAvailable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.105 10:18:35 -- setup/common.sh@32 -- # continue
00:04:42.105 10:18:35 -- setup/common.sh@32 -- # [xtrace collapsed: the same per-key walk of /proc/meminfo repeats, continuing past every key until it reaches AnonHugePages]
00:04:42.106 10:18:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:42.106 10:18:35 -- setup/common.sh@33 -- # echo 0
00:04:42.106 10:18:35 -- setup/common.sh@33 -- # return 0
00:04:42.106 10:18:35 -- setup/hugepages.sh@97 -- # anon=0
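verify_nr_hugepages collects each counter through a fresh get_meminfo call: AnonHugePages above, HugePages_Surp next. A sketch of reading the whole hugepage block of /proc/meminfo in one pass instead (illustrative only, not how the suite does it):

    # One read instead of one full meminfo walk per counter.
    grep -E '^(AnonHugePages|HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize):' /proc/meminfo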
mem=("${mem[@]#Node +([0-9]) }") 00:04:42.106 10:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.106 10:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.106 10:18:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251092 kB' 'MemFree: 5183540 kB' 'MemAvailable: 9500056 kB' 'Buffers: 37544 kB' 'Cached: 4404720 kB' 'SwapCached: 0 kB' 'Active: 1210676 kB' 'Inactive: 3368636 kB' 'Active(anon): 146112 kB' 'Inactive(anon): 1784 kB' 'Active(file): 1064564 kB' 'Inactive(file): 3366852 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 636 kB' 'Writeback: 0 kB' 'AnonPages: 155416 kB' 'Mapped: 73476 kB' 'Shmem: 2616 kB' 'KReclaimable: 207172 kB' 'Slab: 299720 kB' 'SReclaimable: 207172 kB' 'SUnreclaim: 92548 kB' 'KernelStack: 4608 kB' 'PageTables: 3620 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076968 kB' 'Committed_AS: 648332 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14420 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB' 00:04:42.106 10:18:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.106 10:18:35 -- setup/common.sh@32 -- # continue 00:04:42.106 10:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.106 10:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.106 10:18:35 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.106 10:18:35 -- setup/common.sh@32 -- # continue 00:04:42.106 10:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.106 10:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.106 10:18:35 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.106 10:18:35 -- setup/common.sh@32 -- # continue 00:04:42.106 10:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.106 10:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.106 10:18:35 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.106 10:18:35 -- setup/common.sh@32 -- # continue 00:04:42.106 10:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.106 10:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.106 10:18:35 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.106 10:18:35 -- setup/common.sh@32 -- # continue 00:04:42.106 10:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.106 10:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.106 10:18:35 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.106 10:18:35 -- setup/common.sh@32 -- # continue 00:04:42.106 10:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.106 10:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.106 10:18:35 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.106 10:18:35 -- setup/common.sh@32 -- # continue 00:04:42.106 10:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.106 10:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.106 10:18:35 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.106 10:18:35 -- setup/common.sh@32 -- # continue 00:04:42.106 10:18:35 -- setup/common.sh@31 -- 
# IFS=': ' 00:04:42.106 10:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.106 10:18:35 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.106 10:18:35 -- setup/common.sh@32 -- # continue 00:04:42.106 10:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.106 10:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.106 10:18:35 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.106 10:18:35 -- setup/common.sh@32 -- # continue 00:04:42.106 10:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.106 10:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.106 10:18:35 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.106 10:18:35 -- setup/common.sh@32 -- # continue 00:04:42.106 10:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.106 10:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.106 10:18:35 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.106 10:18:35 -- setup/common.sh@32 -- # continue 00:04:42.106 10:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.106 10:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.106 10:18:35 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.106 10:18:35 -- setup/common.sh@32 -- # continue 00:04:42.106 10:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.106 10:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.106 10:18:35 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.106 10:18:35 -- setup/common.sh@32 -- # continue 00:04:42.106 10:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.106 10:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.106 10:18:35 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.106 10:18:35 -- setup/common.sh@32 -- # continue 00:04:42.106 10:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.106 10:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.106 10:18:35 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.106 10:18:35 -- setup/common.sh@32 -- # continue 00:04:42.106 10:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.106 10:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.106 10:18:35 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.106 10:18:35 -- setup/common.sh@32 -- # continue 00:04:42.106 10:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.106 10:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.106 10:18:35 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.106 10:18:35 -- setup/common.sh@32 -- # continue 00:04:42.106 10:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.106 10:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.106 10:18:35 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.106 10:18:35 -- setup/common.sh@32 -- # continue 00:04:42.106 10:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.106 10:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.106 10:18:35 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.106 10:18:35 -- setup/common.sh@32 -- # continue 00:04:42.106 10:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.106 10:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.106 10:18:35 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.106 
10:18:35 -- setup/common.sh@32 -- # continue 00:04:42.106 10:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.107 10:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.107 10:18:35 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.107 10:18:35 -- setup/common.sh@32 -- # continue 00:04:42.107 10:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.107 10:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.107 10:18:35 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.107 10:18:35 -- setup/common.sh@32 -- # continue 00:04:42.107 10:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.107 10:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.107 10:18:35 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.107 10:18:35 -- setup/common.sh@32 -- # continue 00:04:42.107 10:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.107 10:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.107 10:18:35 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.107 10:18:35 -- setup/common.sh@32 -- # continue 00:04:42.107 10:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.107 10:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.107 10:18:35 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.107 10:18:35 -- setup/common.sh@32 -- # continue 00:04:42.107 10:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.107 10:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.107 10:18:35 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.107 10:18:35 -- setup/common.sh@32 -- # continue 00:04:42.107 10:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.107 10:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.107 10:18:35 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.107 10:18:35 -- setup/common.sh@32 -- # continue 00:04:42.107 10:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.107 10:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.107 10:18:35 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.107 10:18:35 -- setup/common.sh@32 -- # continue 00:04:42.107 10:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.107 10:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.107 10:18:35 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.107 10:18:35 -- setup/common.sh@32 -- # continue 00:04:42.107 10:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.107 10:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.107 10:18:35 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.107 10:18:35 -- setup/common.sh@32 -- # continue 00:04:42.107 10:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.107 10:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.107 10:18:35 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.107 10:18:35 -- setup/common.sh@32 -- # continue 00:04:42.107 10:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.107 10:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.107 10:18:35 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.107 10:18:35 -- setup/common.sh@32 -- # continue 00:04:42.107 10:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.107 10:18:35 -- setup/common.sh@31 -- # read -r var val _ 
00:04:42.107 10:18:35 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.107 10:18:35 -- setup/common.sh@32 -- # continue 00:04:42.107 10:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.107 10:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.107 10:18:35 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.107 10:18:35 -- setup/common.sh@32 -- # continue 00:04:42.107 10:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.107 10:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.107 10:18:35 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.107 10:18:35 -- setup/common.sh@32 -- # continue 00:04:42.107 10:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.107 10:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.107 10:18:35 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.107 10:18:35 -- setup/common.sh@32 -- # continue 00:04:42.107 10:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.107 10:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.107 10:18:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.107 10:18:35 -- setup/common.sh@32 -- # continue 00:04:42.107 10:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.107 10:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.107 10:18:35 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.107 10:18:35 -- setup/common.sh@32 -- # continue 00:04:42.107 10:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.107 10:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.107 10:18:35 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.107 10:18:35 -- setup/common.sh@32 -- # continue 00:04:42.107 10:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.107 10:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.107 10:18:35 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.107 10:18:35 -- setup/common.sh@32 -- # continue 00:04:42.107 10:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.107 10:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.107 10:18:35 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.107 10:18:35 -- setup/common.sh@32 -- # continue 00:04:42.107 10:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.107 10:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.107 10:18:35 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.107 10:18:35 -- setup/common.sh@32 -- # continue 00:04:42.107 10:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.107 10:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.107 10:18:35 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.107 10:18:35 -- setup/common.sh@32 -- # continue 00:04:42.107 10:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.107 10:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.107 10:18:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.107 10:18:35 -- setup/common.sh@32 -- # continue 00:04:42.107 10:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.107 10:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.107 10:18:35 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.107 10:18:35 -- setup/common.sh@32 -- # continue 
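The scan traced above is a straight key lookup over a meminfo snapshot: slurp the file into an array, strip any per-node "Node N" prefix, then split each entry on ':' and whitespace until the requested field matches. A minimal sketch of that technique, reconstructed from these xtrace records rather than copied from setup/common.sh (the function name and the failure return are assumptions):

```bash
#!/usr/bin/env bash
shopt -s extglob   # needed for the +([0-9]) pattern below

# Hypothetical helper mirroring the traced lookup; not the SPDK source itself.
get_meminfo_value() {
    local get=$1 var val _ line
    local -a mem
    mapfile -t mem < /proc/meminfo              # one array element per meminfo line
    mem=("${mem[@]#Node +([0-9]) }")            # no-op on /proc/meminfo; strips "Node 0 " on per-node files
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"  # "MemTotal:  12251092 kB" -> var=MemTotal val=12251092
        if [[ $var == "$get" ]]; then
            echo "$val"                         # bare number, e.g. 0 or 1024
            return 0
        fi
    done
    return 1                                    # field not present (assumption)
}

get_meminfo_value HugePages_Surp   # prints 0 on the VM in this run
```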
00:04:42.107 10:18:35 -- setup/common.sh@31 -- # IFS=': '
00:04:42.107 10:18:35 -- setup/common.sh@31 -- # read -r var val _
00:04:42.107 10:18:35 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:42.107 10:18:35 -- setup/common.sh@32 -- # continue
00:04:42.107 10:18:35 -- setup/common.sh@31 -- # IFS=': '
00:04:42.107 10:18:35 -- setup/common.sh@31 -- # read -r var val _
00:04:42.107 10:18:35 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:42.107 10:18:35 -- setup/common.sh@33 -- # echo 0
00:04:42.107 10:18:35 -- setup/common.sh@33 -- # return 0
00:04:42.107 10:18:35 -- setup/hugepages.sh@99 -- # surp=0
00:04:42.107 10:18:35 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:42.107 10:18:35 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:42.107 10:18:35 -- setup/common.sh@18 -- # local node=
00:04:42.107 10:18:35 -- setup/common.sh@19 -- # local var val
00:04:42.107 10:18:35 -- setup/common.sh@20 -- # local mem_f mem
00:04:42.107 10:18:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:42.107 10:18:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:42.107 10:18:35 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:42.107 10:18:35 -- setup/common.sh@28 -- # mapfile -t mem
00:04:42.107 10:18:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:42.107 10:18:35 -- setup/common.sh@31 -- # IFS=': '
00:04:42.107 10:18:35 -- setup/common.sh@31 -- # read -r var val _
00:04:42.107 10:18:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251092 kB' 'MemFree: 5183036 kB' 'MemAvailable: 9499552 kB' 'Buffers: 37544 kB' 'Cached: 4404720 kB' 'SwapCached: 0 kB' 'Active: 1210936 kB' 'Inactive: 3368636 kB' 'Active(anon): 146372 kB' 'Inactive(anon): 1784 kB' 'Active(file): 1064564 kB' 'Inactive(file): 3366852 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 636 kB' 'Writeback: 0 kB' 'AnonPages: 155676 kB' 'Mapped: 73476 kB' 'Shmem: 2616 kB' 'KReclaimable: 207172 kB' 'Slab: 299720 kB' 'SReclaimable: 207172 kB' 'SUnreclaim: 92548 kB' 'KernelStack: 4608 kB' 'PageTables: 3620 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076968 kB' 'Committed_AS: 648332 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14436 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB'
00:04:42.107 10:18:35 [scan elided: every field from MemTotal through HugePages_Free fails the \H\u\g\e\P\a\g\e\s\_\R\s\v\d match at setup/common.sh@32 and continues]
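Between lookups the trace re-runs the same prologue, and the node handling is visible in it: with no node argument the probe path /sys/devices/system/node/node/meminfo fails its -e test, so the read stays on /proc/meminfo; the per-node variant seen later in this run substitutes node0. A sketch of that selection step (names illustrative, not the SPDK source):

```bash
# Pick the meminfo source the way the traced prologue does: default to the
# system-wide file, switch to the per-NUMA-node file only when it exists.
pick_meminfo_file() {
    local node=$1                 # may be empty, as in the trace above
    local mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node${node}/meminfo ]]; then
        mem_f=/sys/devices/system/node/node${node}/meminfo
    fi
    echo "$mem_f"
}

pick_meminfo_file     # -> /proc/meminfo (".../node/node/meminfo" never exists)
pick_meminfo_file 0   # -> /sys/devices/system/node/node0/meminfo on a NUMA box
```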
00:04:42.108 10:18:35 -- setup/common.sh@31 -- # IFS=': '
00:04:42.108 10:18:35 -- setup/common.sh@31 -- # read -r var val _
00:04:42.108 10:18:35 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:42.108 10:18:35 -- setup/common.sh@33 -- # echo 0
00:04:42.108 10:18:35 -- setup/common.sh@33 -- # return 0
00:04:42.108 10:18:35 -- setup/hugepages.sh@100 -- # resv=0
00:04:42.108 nr_hugepages=1024
00:04:42.108 resv_hugepages=0
00:04:42.108 surplus_hugepages=0
00:04:42.108 10:18:35 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:42.108 10:18:35 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:42.108 10:18:35 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:42.108 anon_hugepages=0
00:04:42.108 10:18:35 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:42.108 10:18:35 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:42.108 10:18:35 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:42.108 10:18:35 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:42.108 10:18:35 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:42.108 10:18:35 -- setup/common.sh@18 -- # local node=
00:04:42.108 10:18:35 -- setup/common.sh@19 -- # local var val
00:04:42.108 10:18:35 -- setup/common.sh@20 -- # local mem_f mem
00:04:42.108 10:18:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:42.108 10:18:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:42.108 10:18:35 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:42.108 10:18:35 -- setup/common.sh@28 -- # mapfile -t mem
00:04:42.108 10:18:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:42.108 10:18:35 -- setup/common.sh@31 -- # IFS=': '
00:04:42.108 10:18:35 -- setup/common.sh@31 -- # read -r var val _
00:04:42.108 10:18:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251092 kB' 'MemFree: 5183296 kB' 'MemAvailable: 9499828 kB' 'Buffers: 37544 kB' 'Cached: 4404720 kB' 'SwapCached: 0 kB' 'Active: 1210648 kB' 'Inactive: 3368636 kB' 'Active(anon): 146084 kB' 'Inactive(anon): 1784 kB' 'Active(file): 1064564 kB' 'Inactive(file): 3366852 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 636 kB' 'Writeback: 0 kB' 'AnonPages: 155212 kB' 'Mapped: 73460 kB' 'Shmem: 2616 kB' 'KReclaimable: 207188 kB' 'Slab: 299716 kB' 'SReclaimable: 207188 kB' 'SUnreclaim: 92528 kB' 'KernelStack: 4660 kB' 'PageTables: 3676 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076968 kB' 'Committed_AS: 647688 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14468 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB'
00:04:42.109 10:18:35 [scan elided: every field from MemTotal through CmaFree fails the \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l match at setup/common.sh@32 and continues]
00:04:42.110 10:18:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:42.110 10:18:35 -- setup/common.sh@33 -- # echo 1024
00:04:42.110 10:18:35 -- setup/common.sh@33 -- # return 0
00:04:42.110 10:18:35 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:42.110 10:18:35 -- setup/hugepages.sh@112 -- # get_nodes
00:04:42.110 10:18:35 -- setup/hugepages.sh@27 -- # local node
00:04:42.110 10:18:35 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:42.110 10:18:35 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:42.110 10:18:35 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:42.110 10:18:35 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
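What the @107/@110 checks enforce is a simple accounting identity: the kernel's HugePages_Total must equal the pages the test requested plus any surplus and reserved pages. With this run's numbers that is 1024 == 1024 + 0 + 0. Restated as a sketch, reusing the hypothetical get_meminfo_value helper from earlier (assumed, not SPDK's code):

```bash
nr_hugepages=1024                               # what default_setup requested
surp=$(get_meminfo_value HugePages_Surp)        # 0 in this run
resv=$(get_meminfo_value HugePages_Rsvd)        # 0 in this run
total=$(get_meminfo_value HugePages_Total)      # 1024 in this run

# Identity checked at setup/hugepages.sh@107/@110: the total is fully explained
# by request + surplus + reserved, i.e. 1024 == 1024 + 0 + 0.
(( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2
```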
00:04:42.110 10:18:35 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:42.110 10:18:35 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:42.110 10:18:35 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:42.110 10:18:35 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:42.110 10:18:35 -- setup/common.sh@18 -- # local node=0
00:04:42.110 10:18:35 -- setup/common.sh@19 -- # local var val
00:04:42.110 10:18:35 -- setup/common.sh@20 -- # local mem_f mem
00:04:42.110 10:18:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:42.110 10:18:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:42.110 10:18:35 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:42.110 10:18:35 -- setup/common.sh@28 -- # mapfile -t mem
00:04:42.110 10:18:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:42.110 10:18:35 -- setup/common.sh@31 -- # IFS=': '
00:04:42.110 10:18:35 -- setup/common.sh@31 -- # read -r var val _
00:04:42.110 10:18:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251092 kB' 'MemFree: 5183516 kB' 'MemUsed: 7067576 kB' 'Active: 1210532 kB' 'Inactive: 3368636 kB' 'Active(anon): 145968 kB' 'Inactive(anon): 1784 kB' 'Active(file): 1064564 kB' 'Inactive(file): 3366852 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'Dirty: 636 kB' 'Writeback: 0 kB' 'FilePages: 4442264 kB' 'Mapped: 73460 kB' 'AnonPages: 155220 kB' 'Shmem: 2616 kB' 'KernelStack: 4692 kB' 'PageTables: 3712 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 207188 kB' 'Slab: 299748 kB' 'SReclaimable: 207188 kB' 'SUnreclaim: 92560 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:42.110 10:18:35 [scan elided: every node0 field from MemTotal through HugePages_Free fails the \H\u\g\e\P\a\g\e\s\_\S\u\r\p match at setup/common.sh@32 and continues]
00:04:42.111 10:18:35 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:42.111 10:18:35 -- setup/common.sh@33 -- # echo 0
00:04:42.111 10:18:35 -- setup/common.sh@33 -- # return 0
00:04:42.111 10:18:35 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:42.111 10:18:35 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:42.111 10:18:35 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:42.111 node0=1024 expecting 1024
00:04:42.111 10:18:35 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:42.111 10:18:35 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:42.111 10:18:35 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:42.111 
00:04:42.111 real 0m1.291s
00:04:42.111 user 0m0.293s
00:04:42.111 sys 0m0.967s
00:04:42.111 10:18:35 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:42.111 ************************************
00:04:42.111 END TEST default_setup
00:04:42.111 ************************************
00:04:42.111 10:18:35 -- common/autotest_common.sh@10 -- # set +x
00:04:42.111 10:18:35 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:04:42.111 10:18:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:42.111 10:18:35 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:42.111 10:18:35 -- common/autotest_common.sh@10 -- # set +x
00:04:42.111 ************************************
00:04:42.111 START TEST per_node_1G_alloc
00:04:42.111 ************************************
00:04:42.111 10:18:35 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc
00:04:42.111 10:18:35 -- setup/hugepages.sh@143 -- # local IFS=,
00:04:42.111 10:18:35 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:04:42.111 10:18:35 -- setup/hugepages.sh@49 -- # local size=1048576
00:04:42.111 10:18:35 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:42.111 10:18:35 -- setup/hugepages.sh@51 -- # shift
00:04:42.111 10:18:35 -- setup/hugepages.sh@52 -- # node_ids=("$@")
00:04:42.111 10:18:35 -- setup/hugepages.sh@52 -- # local node_ids
00:04:42.111 10:18:35 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:42.111 10:18:35 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:42.111 10:18:35 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:42.111 10:18:35 -- setup/hugepages.sh@62 -- # user_nodes=("$@")
00:04:42.111 10:18:35 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:42.111 10:18:35 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:42.111 10:18:35 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:42.111 10:18:35 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:42.111 10:18:35 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:42.111 10:18:35 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:42.111 10:18:35 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:42.111 10:18:35 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:42.111 10:18:35 -- setup/hugepages.sh@73 -- # return 0
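The arithmetic behind get_test_nr_hugepages 1048576 0 is visible in the trace: a 1 GiB request expressed in kB, divided by the default 2048 kB hugepage size, gives 512 pages, all pinned to node 0, which is exactly the NRHUGE=512 HUGENODE=0 environment handed to scripts/setup.sh in the next records. As a worked sketch (variable names illustrative):

```bash
size_kb=1048576            # requested pool: 1 GiB expressed in kB
hugepage_kb=2048           # Hugepagesize reported in the dumps above

nr_hugepages=$(( size_kb / hugepage_kb ))   # 1048576 / 2048 = 512
echo "NRHUGE=$nr_hugepages HUGENODE=0"      # matches the env in the next records
```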
-- setup/hugepages.sh@146 -- # NRHUGE=512
00:04:42.111 10:18:35 -- setup/hugepages.sh@146 -- # HUGENODE=0
00:04:42.111 10:18:35 -- setup/hugepages.sh@146 -- # setup output
00:04:42.111 10:18:35 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:42.111 10:18:35 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:42.370 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:04:42.370 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:42.630 10:18:36 -- setup/hugepages.sh@147 -- # nr_hugepages=512
00:04:42.630 10:18:36 -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:04:42.631 10:18:36 -- setup/hugepages.sh@89 -- # local node
00:04:42.631 10:18:36 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:42.631 10:18:36 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:42.631 10:18:36 -- setup/hugepages.sh@92 -- # local surp
00:04:42.631 10:18:36 -- setup/hugepages.sh@93 -- # local resv
00:04:42.631 10:18:36 -- setup/hugepages.sh@94 -- # local anon
00:04:42.631 10:18:36 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:42.631 10:18:36 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:42.631 10:18:36 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:42.631 10:18:36 -- setup/common.sh@18 -- # local node=
00:04:42.631 10:18:36 -- setup/common.sh@19 -- # local var val
00:04:42.631 10:18:36 -- setup/common.sh@20 -- # local mem_f mem
00:04:42.631 10:18:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:42.631 10:18:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:42.631 10:18:36 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:42.631 10:18:36 -- setup/common.sh@28 -- # mapfile -t mem
00:04:42.631 10:18:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:42.631 10:18:36 -- setup/common.sh@31 -- # IFS=': '
00:04:42.631 10:18:36 -- setup/common.sh@31 -- # read -r var val _
00:04:42.631 10:18:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251092 kB' 'MemFree: 6228096 kB' 'MemAvailable: 10544660 kB' 'Buffers: 37544 kB' 'Cached: 4404720 kB' 'SwapCached: 0 kB' 'Active: 1210892 kB' 'Inactive: 3368660 kB' 'Active(anon): 146316 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1064576 kB' 'Inactive(file): 3366872 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 636 kB' 'Writeback: 0 kB' 'AnonPages: 155876 kB' 'Mapped: 73196 kB' 'Shmem: 2616 kB' 'KReclaimable: 207188 kB' 'Slab: 300020 kB' 'SReclaimable: 207188 kB' 'SUnreclaim: 92832 kB' 'KernelStack: 4832 kB' 'PageTables: 4448 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601256 kB' 'Committed_AS: 653012 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14468 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB'
00:04:42.631 10:18:36 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:42.631 10:18:36 -- setup/common.sh@32 -- # continue
[... common.sh@32 repeats this test/continue pair for every remaining non-matching field of the snapshot above ...]
00:04:42.632 10:18:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:42.632 10:18:36 -- setup/common.sh@33 -- # echo 0
00:04:42.632 10:18:36 -- setup/common.sh@33 -- # return 0
00:04:42.632 10:18:36 -- setup/hugepages.sh@97 -- # anon=0
00:04:42.632 10:18:36 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:42.632 10:18:36 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:42.632 10:18:36 -- setup/common.sh@18 -- # local node=
00:04:42.632 10:18:36 -- setup/common.sh@19 -- # local var val
00:04:42.632 10:18:36 -- setup/common.sh@20 -- # local mem_f mem
00:04:42.632 10:18:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:42.632 10:18:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:42.632 10:18:36 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:42.632 10:18:36 -- setup/common.sh@28 -- # mapfile -t mem
00:04:42.632 10:18:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:42.632 10:18:36 -- setup/common.sh@31 -- # IFS=': '
00:04:42.632 10:18:36 -- setup/common.sh@31 -- # read -r var val _
00:04:42.632 10:18:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251092 kB' 'MemFree: 6228096 kB' 'MemAvailable: 10544660 kB' 'Buffers: 37544 kB' 'Cached: 4404720 kB' 'SwapCached: 0 kB' 'Active: 1210892 kB' 'Inactive: 3368660 kB' 'Active(anon): 146316 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1064576 kB' 'Inactive(file): 3366872 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 636 kB' 'Writeback: 0 kB' 'AnonPages: 155876 kB' 'Mapped: 73196 kB' 'Shmem: 2616 kB' 'KReclaimable: 207188 kB' 'Slab: 300020 kB' 'SReclaimable: 207188 kB' 'SUnreclaim: 92832 kB' 'KernelStack: 4832 kB' 'PageTables: 4448 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601256 kB' 'Committed_AS: 657988 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14484 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB'
00:04:42.632 10:18:36 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:42.632 10:18:36 -- setup/common.sh@32 -- # continue
[... common.sh@32 repeats this test/continue pair for every remaining non-matching field of the snapshot above ...]
00:04:42.633 10:18:36 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:42.633 10:18:36 -- setup/common.sh@33 -- # echo 0
00:04:42.633 10:18:36 -- setup/common.sh@33 -- # return 0
00:04:42.633 10:18:36 -- setup/hugepages.sh@99 -- # surp=0
00:04:42.633 10:18:36 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:42.633 10:18:36 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:42.633 10:18:36 -- setup/common.sh@18 -- # local node=
00:04:42.633 10:18:36 -- setup/common.sh@19 -- # local var val
00:04:42.633 10:18:36 -- setup/common.sh@20 -- # local mem_f mem
00:04:42.633 10:18:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:42.633 10:18:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:42.633 10:18:36 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:42.633 10:18:36 -- setup/common.sh@28 -- # mapfile -t mem
00:04:42.633 10:18:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:42.633 10:18:36 -- setup/common.sh@31 -- # IFS=': '
00:04:42.633 10:18:36 -- setup/common.sh@31 -- # read -r var val _
00:04:42.633 10:18:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251092 kB' 'MemFree: 6228096 kB' 'MemAvailable: 10544660 kB' 'Buffers: 37544 kB' 'Cached: 4404720 kB' 'SwapCached: 0 kB' 'Active: 1211152 kB' 'Inactive: 3368660 kB' 'Active(anon): 146576 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1064576 kB' 'Inactive(file): 3366872 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 636 kB' 'Writeback: 0 kB' 'AnonPages: 155748 kB' 'Mapped: 73196 kB' 'Shmem: 2616 kB' 'KReclaimable: 207188 kB' 'Slab: 300020 kB' 'SReclaimable: 207188 kB' 'SUnreclaim: 92832 kB' 'KernelStack: 4832 kB' 'PageTables: 4448 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601256 kB' 'Committed_AS: 657988 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14484 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB'
00:04:42.633 10:18:36 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:42.633 10:18:36 -- setup/common.sh@32 -- # continue
[... common.sh@32 repeats this test/continue pair for every remaining non-matching field of the snapshot above ...]
00:04:42.634 10:18:36 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:42.634 10:18:36 -- setup/common.sh@33 -- # echo 0
00:04:42.634 10:18:36 -- setup/common.sh@33 -- # return 0
00:04:42.634 10:18:36 -- setup/hugepages.sh@100 -- # resv=0
00:04:42.634 10:18:36 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:04:42.634 nr_hugepages=512
00:04:42.634 10:18:36 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:42.634 resv_hugepages=0
00:04:42.634 10:18:36 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:42.634 surplus_hugepages=0
00:04:42.634 10:18:36 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:42.634 anon_hugepages=0
00:04:42.634 10:18:36 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:42.634 10:18:36 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
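The @107/@109 arithmetic is the verification proper: the 512 pages requested via NRHUGE must be accounted for by what the kernel now reports, once surplus and reserved pages are folded in (both zero here). A standalone approximation of that check, reusing the hypothetical get_meminfo_field sketch above; how the 512 literal binds is my reading of the trace, not a quote of hugepages.sh:

  NRHUGE=512
  surp=$(get_meminfo_field HugePages_Surp)     # 0 in this log
  resv=$(get_meminfo_field HugePages_Rsvd)     # 0 in this log
  total=$(get_meminfo_field HugePages_Total)   # 512 in this log

  # Healthy pool: the kernel's total covers the request plus any
  # surplus/reserved pages sitting on top of it.
  if (( total == NRHUGE + surp + resv )); then
      echo "hugepages verified: total=$total surplus=$surp reserved=$resv"
  else
      echo "hugepage mismatch: requested $NRHUGE, kernel reports $total" >&2
      exit 1
  fi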
00:04:42.634 10:18:36 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:42.634 10:18:36 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:42.634 10:18:36 -- setup/common.sh@18 -- # local node=
00:04:42.634 10:18:36 -- setup/common.sh@19 -- # local var val
00:04:42.634 10:18:36 -- setup/common.sh@20 -- # local mem_f mem
00:04:42.634 10:18:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:42.634 10:18:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:42.634 10:18:36 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:42.634 10:18:36 -- setup/common.sh@28 -- # mapfile -t mem
00:04:42.634 10:18:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:42.634 10:18:36 -- setup/common.sh@31 -- # IFS=': '
00:04:42.634 10:18:36 -- setup/common.sh@31 -- # read -r var val _
00:04:42.634 10:18:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251092 kB' 'MemFree: 6228444 kB' 'MemAvailable: 10545008 kB' 'Buffers: 37544 kB' 'Cached: 4404720 kB' 'SwapCached: 0 kB' 'Active: 1210980 kB' 'Inactive: 3368660 kB' 'Active(anon): 146404 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1064576 kB' 'Inactive(file): 3366872 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 636 kB' 'Writeback: 0 kB' 'AnonPages: 155260 kB' 'Mapped: 73332 kB' 'Shmem: 2616 kB' 'KReclaimable: 207188 kB' 'Slab: 299836 kB' 'SReclaimable: 207188 kB' 'SUnreclaim: 92648 kB' 'KernelStack: 4760 kB' 'PageTables: 4120 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601256 kB' 'Committed_AS: 662828 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14500 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB'
00:04:42.635 10:18:36 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:42.635 10:18:36 -- setup/common.sh@32 -- # continue
[... common.sh@32 repeats this test/continue pair for every remaining non-matching field of the snapshot above ...]
00:04:42.636 10:18:36 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:42.636 10:18:36 -- setup/common.sh@33 -- # echo 512
00:04:42.636 10:18:36 -- setup/common.sh@33 -- # return 0
00:04:42.636 10:18:36 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
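For context, the pool being verified here is requested through standard kernel knobs, and NRHUGE=512 with HUGENODE=0 in the header corresponds to the per-node variant. A generic sketch of those interfaces (standard Linux paths, not a quote of setup.sh; the writes need root):

  # System-wide request for 512 x 2 MiB pages:
  sysctl -w vm.nr_hugepages=512        # equivalently: echo 512 > /proc/sys/vm/nr_hugepages
  # Pinned to NUMA node 0, which is what HUGENODE=0 amounts to:
  echo 512 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
  # Read back what was actually granted:
  grep -E 'HugePages_(Total|Free)|Hugepagesize' /proc/meminfo   # expect 512 / 512 / 2048 kB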
00:04:42.636 10:18:36 -- setup/hugepages.sh@112 -- # get_nodes
00:04:42.636 10:18:36 -- setup/hugepages.sh@27 -- # local node
00:04:42.636 10:18:36 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:42.636 10:18:36 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:42.636 10:18:36 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:42.636 10:18:36 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:42.636 10:18:36 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:42.636 10:18:36 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:42.636 10:18:36 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:42.636 10:18:36 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:42.636 10:18:36 -- setup/common.sh@18 -- # local node=0
00:04:42.636 10:18:36 -- setup/common.sh@19 -- # local var val
00:04:42.636 10:18:36 -- setup/common.sh@20 -- # local mem_f mem
00:04:42.636 10:18:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:42.636 10:18:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:42.636 10:18:36 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:42.636 10:18:36 -- setup/common.sh@28 -- # mapfile -t mem
00:04:42.636 10:18:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:42.636 10:18:36 -- setup/common.sh@31 -- # IFS=': '
00:04:42.636 10:18:36 -- setup/common.sh@31 -- # read -r var val _
00:04:42.636 10:18:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251092 kB' 'MemFree: 6228704 kB' 'MemUsed: 6022388 kB' 'Active: 1210980 kB' 'Inactive: 3368660 kB' 'Active(anon): 146404 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1064576 kB' 'Inactive(file): 3366872 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'Dirty: 636 kB' 'Writeback: 0 kB' 'FilePages: 4442264 kB' 'Mapped: 73332 kB' 'AnonPages: 155392 kB' 'Shmem: 2616 kB' 'KernelStack: 4760 kB' 'PageTables: 4120 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 207188 kB' 'Slab: 299836 kB' 'SReclaimable: 207188 kB' 'SUnreclaim: 92648 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:42.636 10:18:36 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:42.636 10:18:36 -- setup/common.sh@32 -- # continue
[... common.sh@32 repeats this test/continue pair across the node0 fields above ...]
00:04:42.636 10:18:36 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:42.636 10:18:36 -- setup/common.sh@33 -- # echo 0
00:04:42.636 10:18:36 -- setup/common.sh@33 -- # return 0
00:04:42.636 10:18:36 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:42.636 10:18:36 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:42.637 10:18:36 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:42.637 10:18:36 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:42.637 10:18:36 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:42.637 node0=512 expecting 512
00:04:42.637 10:18:36 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:42.637 real 0m0.635s
00:04:42.637 user 0m0.230s
00:04:42.637 sys 0m0.438s
00:04:42.637 10:18:36 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:42.637 10:18:36 -- common/autotest_common.sh@10 -- # set +x
00:04:42.637 ************************************
00:04:42.637 END TEST per_node_1G_alloc
00:04:42.637 ************************************
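Note on the trimmed scans: the long runs of "[[ key == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue" are setup/common.sh's get_meminfo walking a meminfo file one field at a time. A minimal sketch reconstructed purely from the xtrace lines in this log, not the verbatim SPDK source (argument handling and the per-node path are inferred):

# get_meminfo KEY [NODE] -- sketch inferred from the trace; not SPDK's exact code
shopt -s extglob                      # needed for the +([0-9]) pattern below
get_meminfo() {
    local get=$1 node=$2              # e.g. get=HugePages_Surp; node may be empty
    local var val
    local mem_f mem
    mem_f=/proc/meminfo
    # per-node counters live under /sys/devices/system/node/node<N>/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]] && [[ -n $node ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")  # node files prefix each line with "Node <N> "
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # the scan trimmed in the log above
        echo "$val"                        # e.g. "echo 0" / "echo 1024" in the trace
        return 0
    done < <(printf '%s\n' "${mem[@]}")
}

Called with a node number it would read the node<N> file; with no node argument it falls back to /proc/meminfo, which matches every call traced below (local node= is empty).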
00:04:42.637 10:18:36 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:04:42.637 10:18:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:42.637 10:18:36 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:42.637 10:18:36 -- common/autotest_common.sh@10 -- # set +x
00:04:42.637 ************************************
00:04:42.637 START TEST even_2G_alloc
00:04:42.637 ************************************
00:04:42.637 10:18:36 -- common/autotest_common.sh@1104 -- # even_2G_alloc
00:04:42.637 10:18:36 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:04:42.637 10:18:36 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:42.637 10:18:36 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:42.637 10:18:36 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:42.637 10:18:36 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:42.637 10:18:36 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:42.637 10:18:36 -- setup/hugepages.sh@62 -- # user_nodes=("$@")
00:04:42.637 10:18:36 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:42.637 10:18:36 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:42.637 10:18:36 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:42.637 10:18:36 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:42.637 10:18:36 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:42.637 10:18:36 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:42.637 10:18:36 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:42.637 10:18:36 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:42.637 10:18:36 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
00:04:42.637 10:18:36 -- setup/hugepages.sh@83 -- # : 0
00:04:42.637 10:18:36 -- setup/hugepages.sh@84 -- # : 0
00:04:42.637 10:18:36 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:42.637 10:18:36 -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:04:42.637 10:18:36 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
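The trace above shows get_test_nr_hugepages turning the test's 2G request into nr_hugepages=1024 without tracing the division itself. A sketch of arithmetic consistent with the trace, assuming size and Hugepagesize are both expressed in kB (variable names here are illustrative, not SPDK's):

hugepagesize_kb=2048                       # 'Hugepagesize: 2048 kB' in the snapshots below
size_kb=2097152                            # 2 GiB, the "2G" in even_2G_alloc
nr_hugepages=$(( size_kb / hugepagesize_kb ))
echo "$nr_hugepages"                       # 1024, matching nr_hugepages=1024 in the trace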
00:04:42.637 10:18:36 -- setup/hugepages.sh@153 -- # setup output
00:04:42.637 10:18:36 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:42.637 10:18:36 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:42.895 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:04:42.895 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:43.466 10:18:37 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:04:43.466 10:18:37 -- setup/hugepages.sh@89 -- # local node
00:04:43.466 10:18:37 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:43.466 10:18:37 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:43.466 10:18:37 -- setup/hugepages.sh@92 -- # local surp
00:04:43.466 10:18:37 -- setup/hugepages.sh@93 -- # local resv
00:04:43.466 10:18:37 -- setup/hugepages.sh@94 -- # local anon
00:04:43.466 10:18:37 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:43.466 10:18:37 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:43.466 10:18:37 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:43.466 10:18:37 -- setup/common.sh@18 -- # local node=
00:04:43.466 10:18:37 -- setup/common.sh@19 -- # local var val
00:04:43.466 10:18:37 -- setup/common.sh@20 -- # local mem_f mem
00:04:43.466 10:18:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:43.466 10:18:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:43.466 10:18:37 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:43.466 10:18:37 -- setup/common.sh@28 -- # mapfile -t mem
00:04:43.466 10:18:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:43.466 10:18:37 -- setup/common.sh@31 -- # IFS=': '
00:04:43.466 10:18:37 -- setup/common.sh@31 -- # read -r var val _
00:04:43.466 10:18:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251092 kB' 'MemFree: 5178436 kB' 'MemAvailable: 9495004 kB' 'Buffers: 37544 kB' 'Cached: 4404724 kB' 'SwapCached: 0 kB' 'Active: 1211076 kB' 'Inactive: 3368652 kB' 'Active(anon): 146488 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1064588 kB' 'Inactive(file): 3366864 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 636 kB' 'Writeback: 0 kB' 'AnonPages: 155428 kB' 'Mapped: 73460 kB' 'Shmem: 2616 kB' 'KReclaimable: 207188 kB' 'Slab: 299940 kB' 'SReclaimable: 207188 kB' 'SUnreclaim: 92752 kB' 'KernelStack: 4752 kB' 'PageTables: 3936 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076968 kB' 'Committed_AS: 654004 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14468 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB'
00:04:43.466 [xtrace trimmed: the IFS=': '/read loop checks every /proc/meminfo key from MemTotal through HardwareCorrupted against AnonHugePages and hits continue on each mismatch]
00:04:43.467 10:18:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:43.467 10:18:37 -- setup/common.sh@33 -- # echo 0
00:04:43.467 10:18:37 -- setup/common.sh@33 -- # return 0
00:04:43.467 10:18:37 -- setup/hugepages.sh@97 -- # anon=0
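The "[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]" test traced above gates the AnonHugePages sample on transparent hugepages not being disabled: the bracketed word in the sysfs file is the active THP mode. A standalone sketch; the sysfs path is the standard kernel one, the variable names are assumptions:

thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
if [[ $thp != *"[never]"* ]]; then
    anon=$(get_meminfo AnonHugePages)   # 'AnonHugePages: 0 kB' in this run, hence anon=0
else
    anon=0                              # THP off: nothing anonymous can be huge-mapped
fi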
00:04:43.467 10:18:37 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:43.467 10:18:37 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:43.467 10:18:37 -- setup/common.sh@18 -- # local node=
00:04:43.467 10:18:37 -- setup/common.sh@19 -- # local var val
00:04:43.467 10:18:37 -- setup/common.sh@20 -- # local mem_f mem
00:04:43.467 10:18:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:43.467 10:18:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:43.467 10:18:37 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:43.467 10:18:37 -- setup/common.sh@28 -- # mapfile -t mem
00:04:43.467 10:18:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:43.467 10:18:37 -- setup/common.sh@31 -- # IFS=': '
00:04:43.467 10:18:37 -- setup/common.sh@31 -- # read -r var val _
00:04:43.468 10:18:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251092 kB' 'MemFree: 5178696 kB' 'MemAvailable: 9495264 kB' 'Buffers: 37544 kB' 'Cached: 4404724 kB' 'SwapCached: 0 kB' 'Active: 1211076 kB' 'Inactive: 3368652 kB' 'Active(anon): 146488 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1064588 kB' 'Inactive(file): 3366864 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 636 kB' 'Writeback: 0 kB' 'AnonPages: 155300 kB' 'Mapped: 73460 kB' 'Shmem: 2616 kB' 'KReclaimable: 207188 kB' 'Slab: 299940 kB' 'SReclaimable: 207188 kB' 'SUnreclaim: 92752 kB' 'KernelStack: 4752 kB' 'PageTables: 3936 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076968 kB' 'Committed_AS: 654004 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14468 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB'
00:04:43.468 [xtrace trimmed: the IFS=': '/read loop checks every /proc/meminfo key from MemTotal through HugePages_Rsvd against HugePages_Surp and hits continue on each mismatch]
00:04:43.469 10:18:37 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:43.469 10:18:37 -- setup/common.sh@33 -- # echo 0
00:04:43.469 10:18:37 -- setup/common.sh@33 -- # return 0
00:04:43.469 10:18:37 -- setup/hugepages.sh@99 -- # surp=0
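Each get_meminfo call traced here runs mem=("${mem[@]#Node +([0-9]) }") even against /proc/meminfo, where it is a no-op; its purpose is to strip the "Node <N> " prefix that per-node meminfo files carry, so both file layouts parse identically. A standalone demonstration with made-up sample lines (extglob must be enabled for the +([0-9]) pattern):

shopt -s extglob
mem=("Node 0 HugePages_Total: 1024" "Node 0 HugePages_Surp: 0")
mem=("${mem[@]#Node +([0-9]) }")   # remove the leading "Node <digits> " from each element
printf '%s\n' "${mem[@]}"          # -> HugePages_Total: 1024 / HugePages_Surp: 0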
00:04:43.469 10:18:37 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:43.469 10:18:37 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:43.469 10:18:37 -- setup/common.sh@18 -- # local node=
00:04:43.469 10:18:37 -- setup/common.sh@19 -- # local var val
00:04:43.469 10:18:37 -- setup/common.sh@20 -- # local mem_f mem
00:04:43.469 10:18:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:43.469 10:18:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:43.469 10:18:37 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:43.469 10:18:37 -- setup/common.sh@28 -- # mapfile -t mem
00:04:43.469 10:18:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:43.469 10:18:37 -- setup/common.sh@31 -- # IFS=': '
00:04:43.469 10:18:37 -- setup/common.sh@31 -- # read -r var val _
00:04:43.469 10:18:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251092 kB' 'MemFree: 5178696 kB' 'MemAvailable: 9495264 kB' 'Buffers: 37544 kB' 'Cached: 4404724 kB' 'SwapCached: 0 kB' 'Active: 1211336 kB' 'Inactive: 3368652 kB' 'Active(anon): 146748 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1064588 kB' 'Inactive(file): 3366864 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 636 kB' 'Writeback: 0 kB' 'AnonPages: 155820 kB' 'Mapped: 73460 kB' 'Shmem: 2616 kB' 'KReclaimable: 207188 kB' 'Slab: 299940 kB' 'SReclaimable: 207188 kB' 'SUnreclaim: 92752 kB' 'KernelStack: 4752 kB' 'PageTables: 3936 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076968 kB' 'Committed_AS: 658824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14468 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB'
00:04:43.469 [xtrace trimmed: the IFS=': '/read loop checks every /proc/meminfo key from MemTotal through HugePages_Free against HugePages_Rsvd and hits continue on each mismatch]
00:04:43.470 10:18:37 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:43.470 10:18:37 -- setup/common.sh@33 -- # echo 0
00:04:43.470 10:18:37 -- setup/common.sh@33 -- # return 0
00:04:43.470 10:18:37 -- setup/hugepages.sh@100 -- # resv=0
00:04:43.470 10:18:37 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:43.470 nr_hugepages=1024
00:04:43.470 10:18:37 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:43.470 resv_hugepages=0
00:04:43.470 10:18:37 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:43.470 surplus_hugepages=0
00:04:43.470 10:18:37 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:43.470 anon_hugepages=0
00:04:43.470 10:18:37 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:43.470 10:18:37 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
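The two arithmetic tests just traced are the heart of verify_nr_hugepages: the pool the test configured must be fully accounted for before the per-node checks run. A sketch of that check as standalone shell, reusing the get_meminfo reconstruction given earlier; the exact variable names in SPDK's hugepages.sh may differ:

nr_hugepages=1024                        # what the test configured via NRHUGE
surp=$(get_meminfo HugePages_Surp)       # 0 in this run
resv=$(get_meminfo HugePages_Rsvd)       # 0 in this run
total=$(get_meminfo HugePages_Total)     # 1024 in this run
(( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2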
00:04:43.470 10:18:37 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:43.470 10:18:37 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:43.470 10:18:37 -- setup/common.sh@18 -- # local node=
00:04:43.470 10:18:37 -- setup/common.sh@19 -- # local var val
00:04:43.470 10:18:37 -- setup/common.sh@20 -- # local mem_f mem
00:04:43.470 10:18:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:43.470 10:18:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:43.470 10:18:37 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:43.470 10:18:37 -- setup/common.sh@28 -- # mapfile -t mem
00:04:43.470 10:18:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:43.470 10:18:37 -- setup/common.sh@31 -- # IFS=': '
00:04:43.470 10:18:37 -- setup/common.sh@31 -- # read -r var val _
00:04:43.470 10:18:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251092 kB' 'MemFree: 5179200 kB' 'MemAvailable: 9495768 kB' 'Buffers: 37544 kB' 'Cached: 4404724 kB' 'SwapCached: 0 kB' 'Active: 1210760 kB' 'Inactive: 3368652 kB' 'Active(anon): 146172 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1064588 kB' 'Inactive(file): 3366864 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 636 kB' 'Writeback: 0 kB' 'AnonPages: 155724 kB' 'Mapped: 73200 kB' 'Shmem: 2616 kB' 'KReclaimable: 207188 kB' 'Slab: 299940 kB' 'SReclaimable: 207188 kB' 'SUnreclaim: 92752 kB' 'KernelStack: 4772 kB' 'PageTables: 3856 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076968 kB' 'Committed_AS: 669224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14484 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB'
00:04:43.470 [xtrace trimmed: the IFS=': '/read loop checks every /proc/meminfo key from MemTotal through CmaFree against HugePages_Total and hits continue on each mismatch]
setup/common.sh@31 -- # read -r var val _ 00:04:43.471 10:18:37 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.471 10:18:37 -- setup/common.sh@32 -- # continue 00:04:43.471 10:18:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.471 10:18:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.471 10:18:37 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.471 10:18:37 -- setup/common.sh@32 -- # continue 00:04:43.471 10:18:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.471 10:18:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.471 10:18:37 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.471 10:18:37 -- setup/common.sh@32 -- # continue 00:04:43.471 10:18:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.471 10:18:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.471 10:18:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.471 10:18:37 -- setup/common.sh@32 -- # continue 00:04:43.471 10:18:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.471 10:18:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.471 10:18:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.471 10:18:37 -- setup/common.sh@32 -- # continue 00:04:43.471 10:18:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.471 10:18:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.471 10:18:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.471 10:18:37 -- setup/common.sh@32 -- # continue 00:04:43.471 10:18:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.471 10:18:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.471 10:18:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.471 10:18:37 -- setup/common.sh@32 -- # continue 00:04:43.471 10:18:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.471 10:18:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.471 10:18:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.471 10:18:37 -- setup/common.sh@32 -- # continue 00:04:43.471 10:18:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.471 10:18:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.471 10:18:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.471 10:18:37 -- setup/common.sh@32 -- # continue 00:04:43.471 10:18:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.471 10:18:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.471 10:18:37 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.471 10:18:37 -- setup/common.sh@32 -- # continue 00:04:43.471 10:18:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.471 10:18:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.471 10:18:37 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.471 10:18:37 -- setup/common.sh@32 -- # continue 00:04:43.471 10:18:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.471 10:18:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.471 10:18:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.471 10:18:37 -- setup/common.sh@33 -- # echo 1024 00:04:43.471 10:18:37 -- setup/common.sh@33 -- # return 0 00:04:43.471 10:18:37 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:43.471 10:18:37 -- setup/hugepages.sh@112 -- # get_nodes 
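The scan condensed above is the whole of the get_meminfo helper being traced: split each meminfo line on ': ', walk the fields, and echo the value once the requested field matches; every `continue` in the trace is one miss of that comparison, and the backslash-escaped right-hand side (\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l) is just how xtrace prints the literal pattern of [[ ... == ... ]]. A minimal standalone sketch of that loop, assuming /proc/meminfo and the sysfs per-node layout (an illustrative rewrite, not the verbatim setup/common.sh source; the sed prefix-strip stands in for the mapfile "Node N" cleanup seen in the trace):

    #!/usr/bin/env bash
    # get_meminfo FIELD [NODE] -- print FIELD's value from /proc/meminfo,
    # or from /sys/devices/system/node/nodeN/meminfo when a node index is given.
    get_meminfo() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo var val _
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        # Per-node files prefix every line with "Node N "; strip it first so
        # the field name lands in $var.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }   # a hit ends the scan
        done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
        return 1
    }
    get_meminfo HugePages_Total      # prints 1024 on the system traced above
    get_meminfo HugePages_Surp 0     # per-node variant, as called at hugepages.sh@117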
00:04:43.471 10:18:37 -- setup/hugepages.sh@27 -- # local node
00:04:43.471 10:18:37 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:43.471 10:18:37 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:43.471 10:18:37 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:43.471 10:18:37 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:43.471 10:18:37 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:43.471 10:18:37 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:43.471 10:18:37 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:43.471 10:18:37 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:43.471 10:18:37 -- setup/common.sh@18 -- # local node=0
00:04:43.471 10:18:37 -- setup/common.sh@19 -- # local var val
00:04:43.471 10:18:37 -- setup/common.sh@20 -- # local mem_f mem
00:04:43.471 10:18:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:43.471 10:18:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:43.471 10:18:37 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:43.471 10:18:37 -- setup/common.sh@28 -- # mapfile -t mem
00:04:43.471 10:18:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:43.471 10:18:37 -- setup/common.sh@31 -- # IFS=': '
00:04:43.471 10:18:37 -- setup/common.sh@31 -- # read -r var val _
00:04:43.471 10:18:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251092 kB' 'MemFree: 5179460 kB' 'MemUsed: 7071632 kB' 'Active: 1211020 kB' 'Inactive: 3368652 kB' 'Active(anon): 146432 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1064588 kB' 'Inactive(file): 3366864 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'Dirty: 636 kB' 'Writeback: 0 kB' 'FilePages: 4442268 kB' 'Mapped: 73200 kB' 'AnonPages: 155596 kB' 'Shmem: 2616 kB' 'KernelStack: 4772 kB' 'PageTables: 3856 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 207188 kB' 'Slab: 299940 kB' 'SReclaimable: 207188 kB' 'SUnreclaim: 92752 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:43.471 10:18:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:43.471 10:18:37 -- setup/common.sh@32 -- # continue
00:04:43.472 [... the same compare/continue cycle repeats for each remaining node0 field, MemFree through HugePages_Free ...]
00:04:43.472 10:18:37 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:43.472 10:18:37 -- setup/common.sh@33 -- # echo 0
00:04:43.472 10:18:37 -- setup/common.sh@33 -- # return 0
00:04:43.472 10:18:37 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:43.472 10:18:37 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:43.472 10:18:37 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:43.472 10:18:37 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:43.472 10:18:37 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:43.472 node0=1024 expecting 1024
00:04:43.472 10:18:37 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:43.472 
00:04:43.472 real	0m0.866s
00:04:43.472 user	0m0.271s
00:04:43.472 sys	0m0.630s
00:04:43.472 10:18:37 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:43.472 10:18:37 -- common/autotest_common.sh@10 -- # set +x
00:04:43.472 ************************************
00:04:43.472 END TEST even_2G_alloc
00:04:43.472 ************************************
00:04:43.472 10:18:37 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:04:43.472 10:18:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:43.472 10:18:37 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:43.472 10:18:37 -- common/autotest_common.sh@10 -- # set +x
00:04:43.730 ************************************
00:04:43.730 START TEST odd_alloc
00:04:43.731 ************************************
00:04:43.731 10:18:37 -- common/autotest_common.sh@1104 -- # odd_alloc
00:04:43.731 10:18:37 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:04:43.731 10:18:37 -- setup/hugepages.sh@49 -- # local size=2098176
00:04:43.731 10:18:37 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:43.731 10:18:37 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:43.731 10:18:37 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:04:43.731 10:18:37 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:43.731 10:18:37 -- setup/hugepages.sh@62 -- # user_nodes=("$@")
00:04:43.731 10:18:37 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:43.731 10:18:37 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:04:43.731 10:18:37 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:43.731 10:18:37 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:43.731 10:18:37 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:43.731 10:18:37 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:43.731 10:18:37 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
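The get_nodes step traced during the verification above (setup/hugepages.sh@27-@33) is a sysfs glob: with extglob enabled, node+([0-9]) matches each NUMA node directory, and each node's current hugepage count is recorded under its index. A sketch under those assumptions (the hugepages-2048kB sysfs path is assumed from the 'Hugepagesize: 2048 kB' values in the snapshots, not shown in the trace itself):

    # Enumerate NUMA nodes and record each node's current 2 MiB hugepage count.
    shopt -s extglob nullglob
    declare -a nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        # ${node##*node} strips everything through the last "node" -> the index.
        nodes_sys[${node##*node}]=$(<"$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    echo "no_nodes=${#nodes_sys[@]}"   # 1 on this single-node VM, nodes_sys[0]=1024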
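The get_test_nr_hugepages call just traced turns a memory budget into a page count: HUGEMEM=2049 MB is 2098176 kB, and with 2048 kB hugepages that is 1024.5 pages, which the test settles at the odd count 1025 (hence the test name). A worked sketch of that arithmetic, assuming ceiling division (the trace only shows the input 2098176 and the result 1025):

    size_kb=2098176        # requested pool: HUGEMEM=2049 MB expressed in kB
    hugepage_kb=2048       # Hugepagesize from the meminfo snapshots
    # Ceiling division: (a + b - 1) / b, so a half-page remainder buys a whole page.
    nr_hugepages=$(( (size_kb + hugepage_kb - 1) / hugepage_kb ))
    echo "$nr_hugepages"   # 1025 -- matches setup/hugepages.sh@57 above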
00:04:43.731 10:18:37 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:43.731 10:18:37 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025
00:04:43.731 10:18:37 -- setup/hugepages.sh@83 -- # : 0
00:04:43.731 10:18:37 -- setup/hugepages.sh@84 -- # : 0
00:04:43.731 10:18:37 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:43.731 10:18:37 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:04:43.731 10:18:37 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:04:43.731 10:18:37 -- setup/hugepages.sh@160 -- # setup output
00:04:43.731 10:18:37 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:43.731 10:18:37 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:43.989 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:04:44.560 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:44.560 10:18:38 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:04:44.560 10:18:38 -- setup/hugepages.sh@89 -- # local node
00:04:44.560 10:18:38 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:44.560 10:18:38 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:44.560 10:18:38 -- setup/hugepages.sh@92 -- # local surp
00:04:44.560 10:18:38 -- setup/hugepages.sh@93 -- # local resv
00:04:44.560 10:18:38 -- setup/hugepages.sh@94 -- # local anon
00:04:44.560 10:18:38 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:44.560 10:18:38 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:44.560 10:18:38 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:44.560 10:18:38 -- setup/common.sh@18 -- # local node=
00:04:44.560 10:18:38 -- setup/common.sh@19 -- # local var val
00:04:44.560 10:18:38 -- setup/common.sh@20 -- # local mem_f mem
00:04:44.560 10:18:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:44.560 10:18:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:44.560 10:18:38 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:44.560 10:18:38 -- setup/common.sh@28 -- # mapfile -t mem
00:04:44.560 10:18:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:44.560 10:18:38 -- setup/common.sh@31 -- # IFS=': '
00:04:44.560 10:18:38 -- setup/common.sh@31 -- # read -r var val _
00:04:44.560 10:18:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251092 kB' 'MemFree: 5177236 kB' 'MemAvailable: 9493804 kB' 'Buffers: 37544 kB' 'Cached: 4404724 kB' 'SwapCached: 0 kB' 'Active: 1211000 kB' 'Inactive: 3368640 kB' 'Active(anon): 146400 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1064600 kB' 'Inactive(file): 3366852 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 636 kB' 'Writeback: 0 kB' 'AnonPages: 155484 kB' 'Mapped: 73200 kB' 'Shmem: 2616 kB' 'KReclaimable: 207188 kB' 'Slab: 300080 kB' 'SReclaimable: 207188 kB' 'SUnreclaim: 92892 kB' 'KernelStack: 4704 kB' 'PageTables: 3840 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5075944 kB' 'Committed_AS: 664612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14468 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB'
00:04:44.560 10:18:38 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:44.560 10:18:38 -- setup/common.sh@32 -- # continue
00:04:44.560 [... the same compare/continue cycle repeats for each remaining field, MemFree through HardwareCorrupted ...]
00:04:44.560 10:18:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:44.560 10:18:38 -- setup/common.sh@33 -- # echo 0
00:04:44.560 10:18:38 -- setup/common.sh@33 -- # return 0
00:04:44.561 10:18:38 -- setup/hugepages.sh@97 -- # anon=0
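Before reading AnonHugePages, verify_nr_hugepages decides whether transparent huge pages can be inflating the pool at all: the check at hugepages.sh@96 compares the transparent_hugepage setting against *\[\n\e\v\e\r\]*, i.e. anonymous huge pages are only counted when THP is not hard-disabled. A sketch of that gate, reusing the get_meminfo helper sketched earlier; the kB-to-pages conversion is an assumption on top of what the trace shows (AnonHugePages is 0 kB in this run, so anon=0 either way):

    # Count THP-backed anonymous memory against the pool only when THP is enabled.
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        anon_kb=$(get_meminfo AnonHugePages)   # 0 kB here
        anon=$(( anon_kb / 2048 ))             # assumed conversion to 2 MiB pages
    else
        anon=0
    fi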
00:04:44.561 10:18:38 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:44.561 10:18:38 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:44.561 10:18:38 -- setup/common.sh@18 -- # local node=
00:04:44.561 10:18:38 -- setup/common.sh@19 -- # local var val
00:04:44.561 10:18:38 -- setup/common.sh@20 -- # local mem_f mem
00:04:44.561 10:18:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:44.561 10:18:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:44.561 10:18:38 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:44.561 10:18:38 -- setup/common.sh@28 -- # mapfile -t mem
00:04:44.561 10:18:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:44.561 10:18:38 -- setup/common.sh@31 -- # IFS=': '
00:04:44.561 10:18:38 -- setup/common.sh@31 -- # read -r var val _
00:04:44.561 10:18:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251092 kB' 'MemFree: 5177236 kB' 'MemAvailable: 9493804 kB' 'Buffers: 37544 kB' 'Cached: 4404724 kB' 'SwapCached: 0 kB' 'Active: 1211000 kB' 'Inactive: 3368640 kB' 'Active(anon): 146400 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1064600 kB' 'Inactive(file): 3366852 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 636 kB' 'Writeback: 0 kB' 'AnonPages: 155356 kB' 'Mapped: 73200 kB' 'Shmem: 2616 kB' 'KReclaimable: 207188 kB' 'Slab: 300080 kB' 'SReclaimable: 207188 kB' 'SUnreclaim: 92892 kB' 'KernelStack: 4704 kB' 'PageTables: 3840 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5075944 kB' 'Committed_AS: 664612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14468 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB'
00:04:44.561 10:18:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:44.561 10:18:38 -- setup/common.sh@32 -- # continue
00:04:44.562 [... the same compare/continue cycle repeats for each remaining field, MemFree through HugePages_Rsvd ...]
00:04:44.562 10:18:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:44.562 10:18:38 -- setup/common.sh@33 -- # echo 0
00:04:44.562 10:18:38 -- setup/common.sh@33 -- # return 0
00:04:44.562 10:18:38 -- setup/hugepages.sh@99 -- # surp=0
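With anon and surp both measured as 0, the reads that follow complete the bookkeeping verify_nr_hugepages enforces: every page reported in HugePages_Total must be explained by the requested count plus surplus plus reserved pages. A condensed sketch of that identity, using the get_meminfo helper sketched earlier and the nr_hugepages=1025 the test set:

    surp=$(get_meminfo HugePages_Surp)     # 0
    resv=$(get_meminfo HugePages_Rsvd)     # 0
    total=$(get_meminfo HugePages_Total)   # 1025
    # The assertion behind hugepages.sh@107/@110: no unexplained pages in the pool.
    (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2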
setup/common.sh@31 -- # IFS=': ' 00:04:44.562 10:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.562 10:18:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.562 10:18:38 -- setup/common.sh@32 -- # continue 00:04:44.562 10:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.562 10:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.562 10:18:38 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.562 10:18:38 -- setup/common.sh@32 -- # continue 00:04:44.562 10:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.562 10:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.562 10:18:38 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.562 10:18:38 -- setup/common.sh@32 -- # continue 00:04:44.562 10:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.562 10:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.562 10:18:38 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.562 10:18:38 -- setup/common.sh@32 -- # continue 00:04:44.562 10:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.562 10:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.562 10:18:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.562 10:18:38 -- setup/common.sh@32 -- # continue 00:04:44.562 10:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.562 10:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.562 10:18:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.562 10:18:38 -- setup/common.sh@32 -- # continue 00:04:44.562 10:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.562 10:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.562 10:18:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.562 10:18:38 -- setup/common.sh@32 -- # continue 00:04:44.562 10:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.562 10:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.562 10:18:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.562 10:18:38 -- setup/common.sh@32 -- # continue 00:04:44.562 10:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.562 10:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.562 10:18:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.562 10:18:38 -- setup/common.sh@32 -- # continue 00:04:44.562 10:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.562 10:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.562 10:18:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.562 10:18:38 -- setup/common.sh@32 -- # continue 00:04:44.562 10:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.562 10:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.562 10:18:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.562 10:18:38 -- setup/common.sh@32 -- # continue 00:04:44.562 10:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.562 10:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.562 10:18:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.562 10:18:38 -- setup/common.sh@32 -- # continue 00:04:44.562 10:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.562 10:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.562 10:18:38 -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:44.562 10:18:38 -- setup/common.sh@32 -- # continue
00:04:44.562 [... xtrace elided: get_meminfo scans every remaining /proc/meminfo key (SwapTotal through HugePages_Free) and takes the 'continue' branch on each non-match ...]
00:04:44.563 10:18:38 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:44.563 10:18:38 -- setup/common.sh@33 -- # echo 0
00:04:44.563 10:18:38 -- setup/common.sh@33 -- # return 0
00:04:44.563 nr_hugepages=1025
00:04:44.563 resv_hugepages=0
00:04:44.563 surplus_hugepages=0
00:04:44.563 anon_hugepages=0
00:04:44.563 10:18:38 -- setup/hugepages.sh@100 -- # resv=0
00:04:44.563 10:18:38 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:04:44.563 10:18:38 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:44.563 10:18:38 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:44.563 10:18:38 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:44.563 10:18:38 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:44.563 10:18:38 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
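For readers skimming the log: the block above is the tail of one verify pass. A minimal sketch of that accounting, reconstructed from the hugepages.sh@100-@109 entries (verify_pool is a hypothetical name used only for illustration; the real function is setup/hugepages.sh's verify_nr_hugepages, whose echoes produce the nr_hugepages=1025 lines above, and get_meminfo is the real helper traced throughout this log):

    # Sketch only -- reconstructed from the xtrace, not the verbatim script.
    verify_pool() {                             # hypothetical wrapper name
        local total resv surp anon
        total=$(get_meminfo HugePages_Total)    # 1025 in this run
        resv=$(get_meminfo HugePages_Rsvd)      # 0
        surp=$(get_meminfo HugePages_Surp)      # 0
        anon=$(get_meminfo AnonHugePages)       # 0
        echo "nr_hugepages=$total"
        echo "resv_hugepages=$resv"
        echo "surplus_hugepages=$surp"
        echo "anon_hugepages=$anon"
        # both checks appear above with 1025 already expanded in:
        (( total == nr_hugepages + surp + resv )) || return 1
        (( total == nr_hugepages ))
    }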
00:04:44.563 10:18:38 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:44.563 10:18:38 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:44.563 10:18:38 -- setup/common.sh@18 -- # local node=
00:04:44.563 10:18:38 -- setup/common.sh@19 -- # local var val
00:04:44.563 10:18:38 -- setup/common.sh@20 -- # local mem_f mem
00:04:44.563 10:18:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:44.563 10:18:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:44.563 10:18:38 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:44.563 10:18:38 -- setup/common.sh@28 -- # mapfile -t mem
00:04:44.563 10:18:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:44.563 10:18:38 -- setup/common.sh@31 -- # IFS=': '
00:04:44.563 10:18:38 -- setup/common.sh@31 -- # read -r var val _
00:04:44.563 10:18:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251092 kB' 'MemFree: 5177456 kB' 'MemAvailable: 9494024 kB' 'Buffers: 37544 kB' 'Cached: 4404724 kB' 'SwapCached: 0 kB' 'Active: 1210756 kB' 'Inactive: 3368640 kB' 'Active(anon): 146156 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1064600 kB' 'Inactive(file): 3366852 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 636 kB' 'Writeback: 0 kB' 'AnonPages: 155496 kB' 'Mapped: 73176 kB' 'Shmem: 2616 kB' 'KReclaimable: 207188 kB' 'Slab: 300096 kB' 'SReclaimable: 207188 kB' 'SUnreclaim: 92908 kB' 'KernelStack: 4688 kB' 'PageTables: 3812 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5075944 kB' 'Committed_AS: 662244 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14500 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB'
00:04:44.563 [... xtrace elided: per-key scan of the snapshot above; every key before HugePages_Total takes the 'continue' branch ...]
00:04:44.565 10:18:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:44.565 10:18:38 -- setup/common.sh@33 -- # echo 1025
00:04:44.565 10:18:38 -- setup/common.sh@33 -- # return 0
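Both lookups above (HugePages_Rsvd, then HugePages_Total) follow the same setup/common.sh pattern, and the long [[ key == \K\e\y ]] runs are simply bash xtrace printing the quoted right-hand side of == with every character escaped. A compact sketch of the helper, reconstructed from the @17-@33 trace (simplified to a read loop; the script itself mapfiles the file into an array and strips any leading 'Node 0' column first, as the mem=(...) line shows):

    # get_meminfo <key> [node] -- sketch reconstructed from the trace above
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _
        local mem_f=/proc/meminfo
        # a node argument redirects the lookup to the NUMA-local counters
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS=': ' read -r var val _; do
            # each non-matching key is one '[[ var == \K\e\y ]] / continue'
            # pair in the xtrace
            [[ $var == "$get" ]] || continue
            echo "$val"          # e.g. 1025 for HugePages_Total
            return 0
        done <"$mem_f"
        return 1
    }

Called as get_meminfo HugePages_Total it prints 1025 here; get_meminfo HugePages_Surp 0 reads node0's file instead, which is exactly the switch visible at common.sh@23/@24 below.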
00:04:44.565 10:18:38 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:44.565 10:18:38 -- setup/hugepages.sh@112 -- # get_nodes
00:04:44.565 10:18:38 -- setup/hugepages.sh@27 -- # local node
00:04:44.565 10:18:38 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:44.565 10:18:38 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025
00:04:44.565 10:18:38 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:44.565 10:18:38 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:44.565 10:18:38 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:44.565 10:18:38 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:44.565 10:18:38 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:44.565 10:18:38 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:44.565 10:18:38 -- setup/common.sh@18 -- # local node=0
00:04:44.565 10:18:38 -- setup/common.sh@19 -- # local var val
00:04:44.565 10:18:38 -- setup/common.sh@20 -- # local mem_f mem
00:04:44.565 10:18:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:44.565 10:18:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:44.565 10:18:38 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:44.565 10:18:38 -- setup/common.sh@28 -- # mapfile -t mem
00:04:44.565 10:18:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:44.565 10:18:38 -- setup/common.sh@31 -- # IFS=': '
00:04:44.565 10:18:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251092 kB' 'MemFree: 5177424 kB' 'MemUsed: 7073668 kB' 'Active: 1210756 kB' 'Inactive: 3368640 kB' 'Active(anon): 146156 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1064600 kB' 'Inactive(file): 3366852 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'Dirty: 636 kB' 'Writeback: 0 kB' 'FilePages: 4442268 kB' 'Mapped: 73176 kB' 'AnonPages: 155756 kB' 'Shmem: 2616 kB' 'KernelStack: 4756 kB' 'PageTables: 3812 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 207188 kB' 'Slab: 300096 kB' 'SReclaimable: 207188 kB' 'SUnreclaim: 92908 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0'
00:04:44.565 10:18:38 -- setup/common.sh@31 -- # read -r var val _
00:04:44.565 [... xtrace elided: per-key scan of the node0 snapshot for HugePages_Surp; every earlier key takes the 'continue' branch ...]
00:04:44.566 10:18:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:44.566 10:18:38 -- setup/common.sh@33 -- # echo 0
00:04:44.566 10:18:38 -- setup/common.sh@33 -- # return 0
00:04:44.566 node0=1025 expecting 1025
00:04:44.566 ************************************
00:04:44.566 END TEST odd_alloc
00:04:44.566 ************************************
00:04:44.566 10:18:38 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:44.566 10:18:38 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:44.566 10:18:38 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:44.566 10:18:38 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:44.566 10:18:38 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025'
00:04:44.566 10:18:38 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]]
00:04:44.566 real 0m0.920s
00:04:44.566 user 0m0.282s
00:04:44.566 sys 0m0.656s
00:04:44.566 10:18:38 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:44.566 10:18:38 -- common/autotest_common.sh@10 -- # set +x
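odd_alloc closes by checking the same totals per NUMA node; node0 carried all 1025 pages ('node0=1025 expecting 1025' above). A sketch of the get_nodes bookkeeping behind that, as traced at hugepages.sh@27-@33 (a reconstruction under this VM's single-node layout; nodes_sys and no_nodes are the trace's own names):

    # Requires extglob for the +([0-9]) pattern, which the harness enables.
    shopt -s extglob
    get_nodes() {
        local node
        for node in /sys/devices/system/node/node+([0-9]); do
            # one slot per NUMA node: nodes_sys[0]=1025 in the trace
            nodes_sys[${node##*node}]=$nr_hugepages
        done
        no_nodes=${#nodes_sys[@]}   # 1 on this single-socket VM
        (( no_nodes > 0 ))          # fail fast if sysfs exposes no nodes
    }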
00:04:44.566 10:18:38 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:04:44.566 10:18:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:44.566 10:18:38 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:44.566 10:18:38 -- common/autotest_common.sh@10 -- # set +x
00:04:44.566 ************************************
00:04:44.566 START TEST custom_alloc
00:04:44.566 ************************************
00:04:44.566 10:18:38 -- common/autotest_common.sh@1104 -- # custom_alloc
00:04:44.566 10:18:38 -- setup/hugepages.sh@167 -- # local IFS=,
00:04:44.566 10:18:38 -- setup/hugepages.sh@169 -- # local node
00:04:44.566 10:18:38 -- setup/hugepages.sh@170 -- # nodes_hp=()
00:04:44.566 10:18:38 -- setup/hugepages.sh@170 -- # local nodes_hp
00:04:44.566 10:18:38 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:04:44.566 10:18:38 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:04:44.566 10:18:38 -- setup/hugepages.sh@49 -- # local size=1048576
00:04:44.566 10:18:38 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:44.566 10:18:38 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:44.566 10:18:38 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:44.566 10:18:38 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:44.566 10:18:38 -- setup/hugepages.sh@62 -- # user_nodes=("$@")
00:04:44.566 10:18:38 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:44.566 10:18:38 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:44.566 10:18:38 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:44.566 10:18:38 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:44.566 10:18:38 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:44.566 10:18:38 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:44.566 10:18:38 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:44.566 10:18:38 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:44.566 10:18:38 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:44.566 10:18:38 -- setup/hugepages.sh@83 -- # : 0
00:04:44.566 10:18:38 -- setup/hugepages.sh@84 -- # : 0
00:04:44.566 10:18:38 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:44.566 10:18:38 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:04:44.566 10:18:38 -- setup/hugepages.sh@176 -- # (( 1 > 1 ))
00:04:44.566 10:18:38 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:04:44.566 10:18:38 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:44.566 10:18:38 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:04:44.566 10:18:38 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:04:44.566 10:18:38 -- setup/hugepages.sh@62 -- # user_nodes=("$@")
00:04:44.566 10:18:38 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:44.566 10:18:38 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:44.566 10:18:38 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:44.566 10:18:38 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:44.566 10:18:38 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:44.566 10:18:38 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:44.566 10:18:38 -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:04:44.566 10:18:38 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:44.566 10:18:38 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:04:44.566 10:18:38 -- setup/hugepages.sh@78 -- # return 0
00:04:44.566 10:18:38 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512'
00:04:44.566 10:18:38 -- setup/hugepages.sh@187 -- # setup output
00:04:44.566 10:18:38 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:44.566 10:18:38 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:44.824 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:04:44.824 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
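custom_alloc starts by turning a 1 GiB request into a page count: the trace shows size=1048576 resolving to nr_hugepages=512, which implies both operands are in kB (1048576 / 2048 = 512 pages of the 2048 kB size reported in the meminfo dumps). A sketch of that sizing and of the HUGENODE hand-off to setup.sh, reconstructed from hugepages.sh@49-@58 and @181-@187 (default_hugepages=2048 is an assumption read off the trace, not a quoted value):

    default_hugepages=2048   # kB; assumed from 'Hugepagesize: 2048 kB' above
    get_test_nr_hugepages() {
        local size=$1                         # 1048576 kB requested
        (( size >= default_hugepages ))       # must cover at least one page
        nr_hugepages=$(( size / default_hugepages ))   # 1048576 / 2048 = 512
        get_test_nr_hugepages_per_node        # spread 512 pages over nodes
    }

    # each node's share is then serialized for setup.sh to consume:
    for node in "${!nodes_hp[@]}"; do
        HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")   # 'nodes_hp[0]=512'
        (( _nr_hugepages += nodes_hp[node] ))
    done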
00:04:45.084 10:18:38 -- setup/hugepages.sh@188 -- # nr_hugepages=512
00:04:45.084 10:18:38 -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:04:45.084 10:18:38 -- setup/hugepages.sh@89 -- # local node
00:04:45.084 10:18:38 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:45.084 10:18:38 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:45.084 10:18:38 -- setup/hugepages.sh@92 -- # local surp
00:04:45.084 10:18:38 -- setup/hugepages.sh@93 -- # local resv
00:04:45.084 10:18:38 -- setup/hugepages.sh@94 -- # local anon
00:04:45.084 10:18:38 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:45.084 10:18:38 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:45.084 10:18:38 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:45.084 10:18:38 -- setup/common.sh@18 -- # local node=
00:04:45.084 10:18:38 -- setup/common.sh@19 -- # local var val
00:04:45.084 10:18:38 -- setup/common.sh@20 -- # local mem_f mem
00:04:45.084 10:18:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:45.084 10:18:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:45.084 10:18:38 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:45.084 10:18:38 -- setup/common.sh@28 -- # mapfile -t mem
00:04:45.084 10:18:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:45.084 10:18:38 -- setup/common.sh@31 -- # IFS=': '
00:04:45.084 10:18:38 -- setup/common.sh@31 -- # read -r var val _
00:04:45.084 10:18:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251092 kB' 'MemFree: 6226568 kB' 'MemAvailable: 10543152 kB' 'Buffers: 37544 kB' 'Cached: 4404724 kB' 'SwapCached: 0 kB' 'Active: 1210516 kB' 'Inactive: 3368640 kB' 'Active(anon): 145916 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1064600 kB' 'Inactive(file): 3366852 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 636 kB' 'Writeback: 0 kB' 'AnonPages: 155476 kB' 'Mapped: 73224 kB' 'Shmem: 2616 kB' 'KReclaimable: 207204 kB' 'Slab: 300072 kB' 'SReclaimable: 207204 kB' 'SUnreclaim: 92868 kB' 'KernelStack: 4720 kB' 'PageTables: 3880 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601256 kB' 'Committed_AS: 654620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14484 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB'
00:04:45.084 [... xtrace elided: per-key scan of the snapshot above for AnonHugePages; every earlier key takes the 'continue' branch ...]
00:04:45.085 10:18:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:45.085 10:18:38 -- setup/common.sh@33 -- # echo 0
00:04:45.085 10:18:38 -- setup/common.sh@33 -- # return 0
00:04:45.085 10:18:38 -- setup/hugepages.sh@97 -- # anon=0
00:04:45.085 10:18:38 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:45.085 10:18:38 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:45.085 10:18:38 -- setup/common.sh@18 -- # local node=
00:04:45.085 10:18:38 -- setup/common.sh@19 -- # local var val
00:04:45.085 10:18:38 -- setup/common.sh@20 -- # local mem_f mem
00:04:45.085 10:18:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:45.085 10:18:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:45.085 10:18:38 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:45.085 10:18:38 -- setup/common.sh@28 -- # mapfile -t mem
00:04:45.085 10:18:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:45.085 10:18:38 -- setup/common.sh@31 -- # IFS=': '
00:04:45.085 10:18:38 -- setup/common.sh@31 -- # read -r var val _
00:04:45.085 10:18:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251092 kB' 'MemFree: 6226820 kB' 'MemAvailable: 10543404 kB' 'Buffers: 37544 kB' 'Cached: 4404724 kB' 'SwapCached: 0 kB' 'Active: 1210600 kB' 'Inactive: 3368640 kB' 'Active(anon): 146000 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1064600 kB' 'Inactive(file): 3366852 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 636 kB' 'Writeback: 0 kB' 'AnonPages: 155172 kB' 'Mapped: 73224 kB' 'Shmem: 2616 kB' 'KReclaimable: 207204 kB' 'Slab: 300072 kB' 'SReclaimable: 207204 kB' 'SUnreclaim: 92868 kB' 'KernelStack: 4688 kB' 'PageTables: 3832 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601256 kB' 'Committed_AS: 660336 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14484 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB'
00:04:45.085 [... xtrace elided: per-key scan of the snapshot above for HugePages_Surp; every earlier key takes the 'continue' branch ...]
00:04:45.086 10:18:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:45.086 10:18:38 -- setup/common.sh@33 -- # echo 0
00:04:45.086 10:18:38 -- setup/common.sh@33 -- # return 0
00:04:45.086 10:18:38 -- setup/hugepages.sh@99 -- # surp=0
00:04:45.086 10:18:39 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:45.086 10:18:39 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:45.086 10:18:39 -- setup/common.sh@18 -- # local node=
00:04:45.086 10:18:39 -- setup/common.sh@19 -- # local var val
00:04:45.086 10:18:39 -- setup/common.sh@20 -- # local mem_f mem
00:04:45.086 10:18:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:45.086 10:18:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:45.086 10:18:39 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:45.086 10:18:39 -- setup/common.sh@28 -- # mapfile -t mem
00:04:45.086 10:18:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:45.086 10:18:39 -- setup/common.sh@31 -- # IFS=': '
00:04:45.086 10:18:39 -- setup/common.sh@31 -- # read -r var val _
00:04:45.086 10:18:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251092 kB' 'MemFree: 6227040 kB' 'MemAvailable: 10543624 kB' 'Buffers: 37544 kB' 'Cached: 4404724 kB' 'SwapCached: 0 kB' 'Active: 1210548 kB' 'Inactive: 3368640 kB' 'Active(anon): 145948 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1064600 kB' 'Inactive(file): 3366852 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 636 kB' 'Writeback: 0 kB' 'AnonPages: 155500 kB' 'Mapped: 73224 kB' 'Shmem: 2616 kB' 'KReclaimable: 207204 kB' 'Slab: 300072 kB' 'SReclaimable: 207204 kB' 'SUnreclaim: 92868 kB' 'KernelStack: 4656 kB' 'PageTables: 3760 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601256 kB' 'Committed_AS: 660336 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14484 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB'
00:04:45.086 10:18:39 [xtrace condensed: the read loop walks MemTotal through HugePages_Free, hitting "continue" on every key until HugePages_Rsvd matches]
00:04:45.347 10:18:39 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:45.347 10:18:39 -- setup/common.sh@33 -- # echo 0
00:04:45.347 10:18:39 -- setup/common.sh@33 -- # return 0
00:04:45.347 nr_hugepages=512 resv_hugepages=0 surplus_hugepages=0 anon_hugepages=0
00:04:45.347 10:18:39 -- setup/hugepages.sh@100 -- # resv=0
00:04:45.347 10:18:39 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:04:45.347 10:18:39 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:45.347 10:18:39 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:45.347 10:18:39 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:45.347 10:18:39 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:45.347 10:18:39 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
00:04:45.347 10:18:39 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:45.347 10:18:39 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:45.347 10:18:39 -- setup/common.sh@18 -- # local node=
00:04:45.347 10:18:39 -- setup/common.sh@19 -- # local var val
00:04:45.347 10:18:39 -- setup/common.sh@20 -- # local mem_f mem
00:04:45.347 10:18:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:45.347 10:18:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:45.347 10:18:39 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:45.347 10:18:39 -- setup/common.sh@28 -- # mapfile -t mem
00:04:45.347 10:18:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:45.347 10:18:39 -- setup/common.sh@31 -- # IFS=': '
00:04:45.347 10:18:39 -- setup/common.sh@31 -- # read -r var val _
00:04:45.348 10:18:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251092 kB' 'MemFree: 6227300 kB' 'MemAvailable: 10543884 kB' 'Buffers: 37544 kB' 'Cached: 4404724 kB' 'SwapCached: 0 kB' 'Active: 1210528 kB' 'Inactive: 3368640 kB' 'Active(anon): 145928 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1064600 kB' 'Inactive(file): 3366852 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 636 kB' 'Writeback: 0 kB' 'AnonPages: 155396 kB' 'Mapped: 73224 kB' 'Shmem: 2616 kB' 'KReclaimable: 207204 kB' 'Slab: 300072 kB' 'SReclaimable: 207204 kB' 'SUnreclaim: 92868 kB' 'KernelStack: 4740 kB' 'PageTables: 3788 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601256 kB' 'Committed_AS: 658908 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14484 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB'
00:04:45.348 10:18:39 [xtrace condensed: the read loop walks MemTotal through CmaFree, hitting "continue" on every key until HugePages_Total matches]
00:04:45.349 10:18:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:45.349 10:18:39 -- setup/common.sh@33 -- # echo 512
00:04:45.349 10:18:39 -- setup/common.sh@33 -- # return 0
00:04:45.349 10:18:39 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:45.349 10:18:39 -- setup/hugepages.sh@112 -- # get_nodes
00:04:45.349 10:18:39 -- setup/hugepages.sh@27 -- # local node
00:04:45.349 10:18:39 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:45.349 10:18:39 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:45.349 10:18:39 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:45.349 10:18:39 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:45.349 10:18:39 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:45.349 10:18:39 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:45.349 10:18:39 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:45.349 10:18:39 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:45.349 10:18:39 -- setup/common.sh@18 -- # local node=0
00:04:45.349 10:18:39 -- setup/common.sh@19 -- # local var val
00:04:45.349 10:18:39 -- setup/common.sh@20 -- # local mem_f mem
00:04:45.349 10:18:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:45.349 10:18:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:45.349 10:18:39 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:45.349 10:18:39 -- setup/common.sh@28 -- # mapfile -t mem
00:04:45.349 10:18:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:45.349 10:18:39 -- setup/common.sh@31 -- # IFS=': '
00:04:45.349 10:18:39 -- setup/common.sh@31 -- # read -r var val _
00:04:45.349 10:18:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251092 kB' 'MemFree: 6246500 kB' 'MemUsed: 6004592 kB' 'Active: 1191536 kB' 'Inactive: 3368640 kB' 'Active(anon): 126936 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1064600 kB' 'Inactive(file): 3366852 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'Dirty: 636 kB' 'Writeback: 0 kB' 'FilePages: 4442268 kB' 'Mapped: 72444 kB' 'AnonPages: 135104 kB' 'Shmem: 2616 kB' 'KernelStack: 4640 kB' 'PageTables: 3716 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 207204 kB' 'Slab: 299552 kB' 'SReclaimable: 207204 kB' 'SUnreclaim: 92348 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:45.349 10:18:39 [xtrace condensed: the read loop walks node0's keys, MemTotal through HugePages_Free, hitting "continue" on every one until HugePages_Surp matches]
00:04:45.350 10:18:39 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:45.350 10:18:39 -- setup/common.sh@33 -- # echo 0
00:04:45.350 10:18:39 -- setup/common.sh@33 -- # return 0
00:04:45.350 node0=512 expecting 512
00:04:45.350 ************************************
00:04:45.350 END TEST custom_alloc
00:04:45.350 ************************************
00:04:45.350 10:18:39 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:45.350 10:18:39 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:45.350 10:18:39 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:45.350 10:18:39 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:45.350 10:18:39 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
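The trace above is setup/common.sh's get_meminfo helper doing the same thing four times in a row: snapshot /proc/meminfo (or a node's meminfo under sysfs), strip any "Node N" prefix, then read key/value pairs with IFS=': ' until the requested field matches and its value is echoed. A minimal standalone sketch of that pattern, under stated assumptions: the names below are illustrative, and the traced script strips the node prefix with mapfile plus extglob expansion rather than the sed used here for brevity.

    #!/usr/bin/env bash
    # Sketch of the meminfo scan traced above: given a field name and an
    # optional NUMA node id, print that field's value (without units).
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # Per-node counters live under sysfs when a node id is supplied.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local var val _
        # Node files prefix every line with "Node N "; strip it so the key
        # lands in $var. IFS=': ' splits "HugePages_Total:   512" into
        # var=HugePages_Total, val=512 (any trailing "kB" falls into $_).
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }

    # e.g. get_meminfo HugePages_Surp     -> 0 in the trace above
    #      get_meminfo HugePages_Surp 0   -> node0's surplus count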
00:04:45.350 10:18:39 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:45.350
00:04:45.350 real 0m0.698s
00:04:45.350 user 0m0.206s
00:04:45.350 sys 0m0.482s
00:04:45.350 10:18:39 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:45.350 10:18:39 -- common/autotest_common.sh@10 -- # set +x
00:04:45.350 10:18:39 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:04:45.350 10:18:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:45.350 10:18:39 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:45.350 10:18:39 -- common/autotest_common.sh@10 -- # set +x
00:04:45.350 ************************************
00:04:45.350 START TEST no_shrink_alloc
00:04:45.350 ************************************
00:04:45.350 10:18:39 -- common/autotest_common.sh@1104 -- # no_shrink_alloc
00:04:45.350 10:18:39 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:04:45.350 10:18:39 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:45.350 10:18:39 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:45.350 10:18:39 -- setup/hugepages.sh@51 -- # shift
00:04:45.350 10:18:39 -- setup/hugepages.sh@52 -- # node_ids=("$@")
00:04:45.350 10:18:39 -- setup/hugepages.sh@52 -- # local node_ids
00:04:45.350 10:18:39 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:45.350 10:18:39 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:45.350 10:18:39 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:45.350 10:18:39 -- setup/hugepages.sh@62 -- # user_nodes=("$@")
00:04:45.350 10:18:39 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:45.350 10:18:39 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:45.350 10:18:39 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:45.350 10:18:39 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:45.350 10:18:39 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:45.350 10:18:39 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:45.350 10:18:39 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:45.350 10:18:39 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:45.350 10:18:39 -- setup/hugepages.sh@73 -- # return 0
00:04:45.350 10:18:39 -- setup/hugepages.sh@198 -- # setup output
00:04:45.350 10:18:39 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:45.350 10:18:39 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:45.608 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:04:45.608 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
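get_test_nr_hugepages turns the requested pool size into a page count before the setup script runs: the test asks for 2097152 kB (2 GiB), and with the 2048 kB Hugepagesize reported in the meminfo snapshots that works out to the nr_hugepages=1024 seen in the trace. A one-line sketch of that arithmetic (variable names illustrative, not the script's exact ones):

    # Size-to-count arithmetic behind "nr_hugepages=1024" above.
    size_kb=2097152                                                 # requested pool, in kB
    hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)  # 2048 on this box
    echo "nr_hugepages=$(( size_kb / hugepage_kb ))"                # -> nr_hugepages=1024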
00:04:46.177 10:18:39 -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:04:46.177 10:18:39 -- setup/hugepages.sh@89 -- # local node
00:04:46.177 10:18:39 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:46.177 10:18:39 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:46.177 10:18:39 -- setup/hugepages.sh@92 -- # local surp
00:04:46.177 10:18:39 -- setup/hugepages.sh@93 -- # local resv
00:04:46.177 10:18:39 -- setup/hugepages.sh@94 -- # local anon
00:04:46.177 10:18:39 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:46.177 10:18:39 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:46.177 10:18:39 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:46.177 10:18:39 -- setup/common.sh@18 -- # local node=
00:04:46.177 10:18:39 -- setup/common.sh@19 -- # local var val
00:04:46.177 10:18:39 -- setup/common.sh@20 -- # local mem_f mem
00:04:46.177 10:18:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:46.177 10:18:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:46.177 10:18:39 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:46.177 10:18:39 -- setup/common.sh@28 -- # mapfile -t mem
00:04:46.177 10:18:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:46.177 10:18:39 -- setup/common.sh@31 -- # IFS=': '
00:04:46.177 10:18:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251092 kB' 'MemFree: 5201428 kB' 'MemAvailable: 9518008 kB' 'Buffers: 37552 kB' 'Cached: 4404724 kB' 'SwapCached: 0 kB' 'Active: 1190292 kB' 'Inactive: 3368636 kB' 'Active(anon): 125680 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1064612 kB' 'Inactive(file): 3366848 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 684 kB' 'Writeback: 0 kB' 'AnonPages: 135220 kB' 'Mapped: 72456 kB' 'Shmem: 2616 kB' 'KReclaimable: 207192 kB' 'Slab: 298676 kB' 'SReclaimable: 207192 kB' 'SUnreclaim: 91484 kB' 'KernelStack: 4272 kB' 'PageTables: 2900 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076968 kB' 'Committed_AS: 602660 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14100 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB'
00:04:46.177 10:18:39 -- setup/common.sh@31 -- # read -r var val _
00:04:46.177 10:18:39 [xtrace condensed: the read loop walks MemTotal through HardwareCorrupted, hitting "continue" on every key until AnonHugePages matches]
00:04:46.178 10:18:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:46.178 10:18:39 -- setup/common.sh@33 -- # echo 0
00:04:46.178 10:18:39 -- setup/common.sh@33 -- # return 0
00:04:46.178 10:18:39 -- setup/hugepages.sh@97 -- # anon=0
00:04:46.178 10:18:39 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:46.178 10:18:39 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:46.178 10:18:39 -- setup/common.sh@18 -- # local node=
00:04:46.178 10:18:39 -- setup/common.sh@19 -- # local var val
00:04:46.178 10:18:39 -- setup/common.sh@20 -- # local mem_f mem
00:04:46.178 10:18:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:46.178 10:18:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:46.178 10:18:39 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:46.178 10:18:39 -- setup/common.sh@28 -- # mapfile -t mem
00:04:46.178 10:18:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:46.178 10:18:39 -- setup/common.sh@31 -- # IFS=': '
00:04:46.178 10:18:39 -- setup/common.sh@31 -- # read -r var val _
00:04:46.178 10:18:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251092 kB' 'MemFree: 5201428 kB' 'MemAvailable: 9518008 kB' 'Buffers: 37552 kB' 'Cached: 4404724 kB' 'SwapCached: 0 kB' 'Active: 1190644 kB' 'Inactive: 3368636 kB' 'Active(anon): 126032 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1064612 kB' 'Inactive(file): 3366848 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 684 kB' 'Writeback: 0 kB' 'AnonPages: 135316 kB' 'Mapped: 72716 kB' 'Shmem: 2616 kB' 'KReclaimable: 207192 kB' 'Slab: 298676 kB' 'SReclaimable: 207192 kB' 'SUnreclaim: 91484 kB' 'KernelStack: 4256 kB' 'PageTables: 2888 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076968 kB' 'Committed_AS: 597668 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14100 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024'
'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB' 00:04:46.178 10:18:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.178 10:18:39 -- setup/common.sh@32 -- # continue 00:04:46.178 10:18:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.178 10:18:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.178 10:18:39 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.178 10:18:39 -- setup/common.sh@32 -- # continue 00:04:46.178 10:18:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.178 10:18:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.178 10:18:39 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.178 10:18:39 -- setup/common.sh@32 -- # continue 00:04:46.178 10:18:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.178 10:18:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.178 10:18:39 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.178 10:18:39 -- setup/common.sh@32 -- # continue 00:04:46.178 10:18:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.178 10:18:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.178 10:18:39 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.178 10:18:39 -- setup/common.sh@32 -- # continue 00:04:46.178 10:18:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.178 10:18:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.178 10:18:39 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.178 10:18:39 -- setup/common.sh@32 -- # continue 00:04:46.178 10:18:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.178 10:18:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.178 10:18:39 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.178 10:18:39 -- setup/common.sh@32 -- # continue 00:04:46.178 10:18:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.178 10:18:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.178 10:18:39 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.178 10:18:39 -- setup/common.sh@32 -- # continue 00:04:46.178 10:18:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.178 10:18:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.178 10:18:39 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.178 10:18:39 -- setup/common.sh@32 -- # continue 00:04:46.178 10:18:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.178 10:18:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.178 10:18:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.178 10:18:39 -- setup/common.sh@32 -- # continue 00:04:46.178 10:18:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.178 10:18:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.178 10:18:39 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.178 10:18:39 -- setup/common.sh@32 -- # continue 00:04:46.178 10:18:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.178 10:18:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.178 10:18:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.178 10:18:39 -- setup/common.sh@32 -- # continue 00:04:46.178 10:18:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.178 10:18:39 -- setup/common.sh@31 -- # read -r var val _ 
00:04:46.178 10:18:39 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.178 10:18:39 -- setup/common.sh@32 -- # continue 00:04:46.178 10:18:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.178 10:18:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.178 10:18:39 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.178 10:18:39 -- setup/common.sh@32 -- # continue 00:04:46.178 10:18:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.178 10:18:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.178 10:18:39 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.178 10:18:39 -- setup/common.sh@32 -- # continue 00:04:46.178 10:18:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.178 10:18:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.178 10:18:39 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.178 10:18:39 -- setup/common.sh@32 -- # continue 00:04:46.178 10:18:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.178 10:18:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.178 10:18:39 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.178 10:18:39 -- setup/common.sh@32 -- # continue 00:04:46.178 10:18:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.178 10:18:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.178 10:18:39 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.178 10:18:39 -- setup/common.sh@32 -- # continue 00:04:46.178 10:18:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.178 10:18:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.178 10:18:39 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.178 10:18:39 -- setup/common.sh@32 -- # continue 00:04:46.178 10:18:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.178 10:18:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.178 10:18:39 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.178 10:18:39 -- setup/common.sh@32 -- # continue 00:04:46.178 10:18:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.178 10:18:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.178 10:18:39 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.178 10:18:39 -- setup/common.sh@32 -- # continue 00:04:46.178 10:18:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.178 10:18:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.178 10:18:39 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.178 10:18:39 -- setup/common.sh@32 -- # continue 00:04:46.178 10:18:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.178 10:18:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.178 10:18:39 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.178 10:18:39 -- setup/common.sh@32 -- # continue 00:04:46.178 10:18:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.178 10:18:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.178 10:18:39 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.178 10:18:39 -- setup/common.sh@32 -- # continue 00:04:46.178 10:18:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.178 10:18:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.178 10:18:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.178 10:18:39 -- setup/common.sh@32 -- # continue 00:04:46.178 10:18:39 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:46.178 10:18:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.178 10:18:39 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.178 10:18:39 -- setup/common.sh@32 -- # continue 00:04:46.178 10:18:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.178 10:18:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.179 10:18:39 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.179 10:18:39 -- setup/common.sh@32 -- # continue 00:04:46.179 10:18:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.179 10:18:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.179 10:18:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.179 10:18:39 -- setup/common.sh@32 -- # continue 00:04:46.179 10:18:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.179 10:18:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.179 10:18:39 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.179 10:18:39 -- setup/common.sh@32 -- # continue 00:04:46.179 10:18:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.179 10:18:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.179 10:18:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.179 10:18:39 -- setup/common.sh@32 -- # continue 00:04:46.179 10:18:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.179 10:18:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.179 10:18:39 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.179 10:18:39 -- setup/common.sh@32 -- # continue 00:04:46.179 10:18:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.179 10:18:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.179 10:18:39 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.179 10:18:39 -- setup/common.sh@32 -- # continue 00:04:46.179 10:18:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.179 10:18:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.179 10:18:39 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.179 10:18:39 -- setup/common.sh@32 -- # continue 00:04:46.179 10:18:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.179 10:18:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.179 10:18:39 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.179 10:18:39 -- setup/common.sh@32 -- # continue 00:04:46.179 10:18:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.179 10:18:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.179 10:18:39 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.179 10:18:39 -- setup/common.sh@32 -- # continue 00:04:46.179 10:18:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.179 10:18:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.179 10:18:39 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.179 10:18:39 -- setup/common.sh@32 -- # continue 00:04:46.179 10:18:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.179 10:18:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.179 10:18:39 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.179 10:18:39 -- setup/common.sh@32 -- # continue 00:04:46.179 10:18:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.179 10:18:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.179 10:18:39 -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.179 10:18:39 -- setup/common.sh@32 -- # continue 00:04:46.179 10:18:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.179 10:18:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.179 10:18:39 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.179 10:18:39 -- setup/common.sh@32 -- # continue 00:04:46.179 10:18:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.179 10:18:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.179 10:18:39 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.179 10:18:39 -- setup/common.sh@32 -- # continue 00:04:46.179 10:18:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.179 10:18:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.179 10:18:39 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.179 10:18:39 -- setup/common.sh@32 -- # continue 00:04:46.179 10:18:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.179 10:18:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.179 10:18:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.179 10:18:39 -- setup/common.sh@32 -- # continue 00:04:46.179 10:18:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.179 10:18:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.179 10:18:39 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.179 10:18:39 -- setup/common.sh@32 -- # continue 00:04:46.179 10:18:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.179 10:18:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.179 10:18:39 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.179 10:18:39 -- setup/common.sh@32 -- # continue 00:04:46.179 10:18:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.179 10:18:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.179 10:18:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.179 10:18:39 -- setup/common.sh@32 -- # continue 00:04:46.179 10:18:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.179 10:18:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.179 10:18:39 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.179 10:18:39 -- setup/common.sh@32 -- # continue 00:04:46.179 10:18:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.179 10:18:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.179 10:18:39 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.179 10:18:39 -- setup/common.sh@32 -- # continue 00:04:46.179 10:18:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.179 10:18:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.179 10:18:39 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.179 10:18:39 -- setup/common.sh@33 -- # echo 0 00:04:46.179 10:18:39 -- setup/common.sh@33 -- # return 0 00:04:46.179 10:18:39 -- setup/hugepages.sh@99 -- # surp=0 00:04:46.179 10:18:39 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:46.179 10:18:39 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:46.179 10:18:39 -- setup/common.sh@18 -- # local node= 00:04:46.179 10:18:39 -- setup/common.sh@19 -- # local var val 00:04:46.179 10:18:39 -- setup/common.sh@20 -- # local mem_f mem 00:04:46.179 10:18:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.179 10:18:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 
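A note for readers skimming this trace: get_meminfo is just a key lookup over a meminfo file, and the wall of @31/@32 entries above is its read-and-compare loop. Below is a minimal, behaviorally equivalent sketch reconstructed from the trace, not the verbatim SPDK helper; the @-references in the comments point at the trace entries above.

    #!/usr/bin/env bash
    shopt -s extglob   # for the 'Node +([0-9])' strip and node globbing

    # Sketch reconstructed from the trace above; not the verbatim SPDK helper.
    get_meminfo() {
        local get=$1 node=$2
        local var val _ mem
        local mem_f=/proc/meminfo
        # With a node argument, read that node's own meminfo instead (@23/@24).
        [[ -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem <"$mem_f"            # @28
        mem=("${mem[@]#Node +([0-9]) }")    # @29: drop the per-node 'Node N ' prefix
        # Walk 'Key: value ...' pairs until the requested key matches (@31-@33).
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val" && return 0
        done < <(printf '%s\n' "${mem[@]}") # @16
        return 1
    }

    get_meminfo AnonHugePages    # -> 0 on this runner
    get_meminfo HugePages_Surp   # -> 0 on this runner

So the two calls above boil down to anon=0 and surp=0 for this run.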
00:04:46.179 10:18:39 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:46.179 10:18:39 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:46.179 10:18:39 -- setup/common.sh@18 -- # local node= 00:04:46.179 10:18:39 -- setup/common.sh@19 -- # local var val 00:04:46.179 10:18:39 -- setup/common.sh@20 -- # local mem_f mem 00:04:46.179 10:18:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.179 10:18:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.179 10:18:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.179 10:18:39 -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.179 10:18:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.179 10:18:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.179 10:18:39 -- setup/common.sh@31 -- # read -r var val _
00:04:46.179 10:18:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251092 kB' 'MemFree: 5201900 kB' 'MemAvailable: 9518480 kB' 'Buffers: 37552 kB' 'Cached: 4404724 kB' 'SwapCached: 0 kB' 'Active: 1190532 kB' 'Inactive: 3368636 kB' 'Active(anon): 125920 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1064612 kB' 'Inactive(file): 3366848 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 684 kB' 'Writeback: 0 kB' 'AnonPages: 134920 kB' 'Mapped: 72632 kB' 'Shmem: 2616 kB' 'KReclaimable: 207192 kB' 'Slab: 298764 kB' 'SReclaimable: 207192 kB' 'SUnreclaim: 91572 kB' 'KernelStack: 4360 kB' 'PageTables: 3240 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076968 kB' 'Committed_AS: 597668 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14100 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB'
[trace condensed: key-by-key scan until HugePages_Rsvd]
00:04:46.180 10:18:39 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.180 10:18:39 -- setup/common.sh@33 -- # echo 0 00:04:46.180 10:18:39 -- setup/common.sh@33 -- # return 0
nr_hugepages=1024
resv_hugepages=0
surplus_hugepages=0
anon_hugepages=0
10:18:39 -- setup/hugepages.sh@100 -- # resv=0 00:04:46.180 10:18:39 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:46.180 10:18:39 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:46.180 10:18:39 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:46.180 10:18:39 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:46.180 10:18:39 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:46.180 10:18:39 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
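The @102-@110 arithmetic above is the actual verification: HugePages_Total read back from /proc/meminfo has to equal the requested pool plus any surplus and reserved pages. Spelled out with the values captured in this run (a worked sketch; variable names mirror the hugepages.sh trace):

    # Values from the trace above (this run).
    nr_hugepages=1024  # requested pool size
    anon=0             # AnonHugePages (transparent hugepages); 0 on this runner
    surp=0             # HugePages_Surp
    resv=0             # HugePages_Rsvd
    total=1024         # HugePages_Total, read back next via get_meminfo
    # hugepages.sh@107/@110-style check: every page in the pool is accounted for.
    (( total == nr_hugepages + surp + resv )) && echo 'hugepage accounting consistent'

With surp and resv both 0, the check reduces to total == nr_hugepages, which is exactly the @109 comparison.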
00:04:46.180 10:18:40 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:46.180 10:18:40 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:46.180 10:18:40 -- setup/common.sh@18 -- # local node= 00:04:46.180 10:18:40 -- setup/common.sh@19 -- # local var val 00:04:46.180 10:18:40 -- setup/common.sh@20 -- # local mem_f mem 00:04:46.180 10:18:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.180 10:18:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.180 10:18:40 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.180 10:18:40 -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.180 10:18:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.180 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.180 10:18:40 -- setup/common.sh@31 -- # read -r var val _
00:04:46.180 10:18:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251092 kB' 'MemFree: 5202264 kB' 'MemAvailable: 9518844 kB' 'Buffers: 37552 kB' 'Cached: 4404724 kB' 'SwapCached: 0 kB' 'Active: 1190800 kB' 'Inactive: 3368636 kB' 'Active(anon): 126188 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1064612 kB' 'Inactive(file): 3366848 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 684 kB' 'Writeback: 0 kB' 'AnonPages: 135028 kB' 'Mapped: 72536 kB' 'Shmem: 2616 kB' 'KReclaimable: 207192 kB' 'Slab: 298772 kB' 'SReclaimable: 207192 kB' 'SUnreclaim: 91580 kB' 'KernelStack: 4352 kB' 'PageTables: 3032 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076968 kB' 'Committed_AS: 597008 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14100 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB'
[trace condensed: key-by-key scan until HugePages_Total]
00:04:46.182 10:18:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.182 10:18:40 -- setup/common.sh@33 -- # echo 1024 00:04:46.182 10:18:40 -- setup/common.sh@33 -- # return 0 00:04:46.182 10:18:40 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
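The get_nodes pass that follows repeats the same lookup against each node's /sys/devices/system/node/node<N>/meminfo and checks the per-node split of the pool; on this single-node VM that is the whole 1024 pages, hence the 'node0=1024 expecting 1024' line below. A condensed sketch of that loop, reusing the get_meminfo sketch from earlier (the real hugepages.sh keeps separate nodes_sys/nodes_test arrays and folds resv/surp into the expectation; this collapses it to the observable check, and needs the shopt -s extglob from the earlier sketch):

    expected=1024   # nr_hugepages requested for this run
    for node in /sys/devices/system/node/node+([0-9]); do
        n=${node##*node}
        total=$(get_meminfo HugePages_Total "$n")
        surp=$(get_meminfo HugePages_Surp "$n")
        echo "node$n=$((total - surp)) expecting $expected"
        (( total - surp == expected )) || echo "hugepage imbalance on node$n"
    done

One wrinkle visible in the node0 snapshot below: per-node meminfo files prefix every line with 'Node 0 ' and carry a slightly different key set (MemUsed and FilePages appear, MemAvailable does not), which is why get_meminfo strips that prefix at @29.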
00:04:46.182 10:18:40 -- setup/hugepages.sh@112 -- # get_nodes 00:04:46.182 10:18:40 -- setup/hugepages.sh@27 -- # local node 00:04:46.182 10:18:40 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:46.182 10:18:40 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:46.182 10:18:40 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:46.182 10:18:40 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:46.182 10:18:40 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:46.182 10:18:40 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:46.182 10:18:40 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:46.182 10:18:40 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:46.182 10:18:40 -- setup/common.sh@18 -- # local node=0 00:04:46.182 10:18:40 -- setup/common.sh@19 -- # local var val 00:04:46.182 10:18:40 -- setup/common.sh@20 -- # local mem_f mem 00:04:46.182 10:18:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.182 10:18:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:46.182 10:18:40 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:46.182 10:18:40 -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.182 10:18:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.182 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.182 10:18:40 -- setup/common.sh@31 -- # read -r var val _
00:04:46.182 10:18:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251092 kB' 'MemFree: 5202028 kB' 'MemUsed: 7049064 kB' 'Active: 1190472 kB' 'Inactive: 3368636 kB' 'Active(anon): 125860 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1064612 kB' 'Inactive(file): 3366848 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'Dirty: 684 kB' 'Writeback: 0 kB' 'FilePages: 4442276 kB' 'Mapped: 72324 kB' 'AnonPages: 135056 kB' 'Shmem: 2616 kB' 'KernelStack: 4324 kB' 'PageTables: 2876 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 207196 kB' 'Slab: 298776 kB' 'SReclaimable: 207196 kB' 'SUnreclaim: 91580 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[trace condensed: key-by-key scan of node0's meminfo until HugePages_Surp]
00:04:46.183 10:18:40 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.183 10:18:40 -- setup/common.sh@33 -- # echo 0 00:04:46.183 10:18:40 -- setup/common.sh@33 -- # return 0
node0=1024 expecting 1024
10:18:40 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:46.183 10:18:40 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:46.183 10:18:40 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:46.183 10:18:40 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:46.183 10:18:40 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:46.183 10:18:40 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:46.183 10:18:40 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:04:46.183 10:18:40 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:46.183 10:18:40 -- setup/hugepages.sh@202 -- # setup output 00:04:46.183 10:18:40 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:46.183 10:18:40 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:46.441 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:04:46.441 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:46.441 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:46.702 10:18:40 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:46.702 10:18:40 -- setup/hugepages.sh@89 -- # local node 00:04:46.702 10:18:40 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:46.702 10:18:40 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:46.702 10:18:40 -- setup/hugepages.sh@92 -- # local surp 00:04:46.702 10:18:40 -- setup/hugepages.sh@93 -- # local resv 00:04:46.702 10:18:40 -- setup/hugepages.sh@94 -- # local anon 00:04:46.702 10:18:40 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:46.702 10:18:40 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:46.702 10:18:40 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:46.702 10:18:40 -- setup/common.sh@18 -- # local node= 00:04:46.702 10:18:40 -- setup/common.sh@19 -- # local var val 00:04:46.702 10:18:40 -- setup/common.sh@20 -- # local mem_f mem 00:04:46.702 10:18:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.702 10:18:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.702 10:18:40 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.702 10:18:40 -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.702 10:18:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.702 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.702 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.703 10:18:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251092 kB' 'MemFree: 5201304 kB' 'MemAvailable: 9517888 kB' 'Buffers: 37552 kB' 'Cached: 4404724 kB' 'SwapCached: 0 kB' 'Active: 1191408 kB' 'Inactive: 3368624 kB' 'Active(anon): 126784 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1064624 kB' 'Inactive(file): 3366836 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 684 kB' 'Writeback: 0 kB' 'AnonPages: 135760 kB' 'Mapped: 72536 kB' 'Shmem: 2616 kB' 'KReclaimable: 207196 kB' 'Slab: 298972 kB' 'SReclaimable: 207196 kB' 'SUnreclaim: 91776 kB' 'KernelStack: 4380 kB' 'PageTables: 3548 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076968 kB' 'Committed_AS: 603496 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14116 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB' 00:04:46.703 10:18:40 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.703 10:18:40 -- setup/common.sh@32 -- # continue 00:04:46.703 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.703 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.703 10:18:40 -- setup/common.sh@32 -- # [[ MemFree == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.703 10:18:40 -- setup/common.sh@32 -- # continue [xtrace condensed: the same @31/@32 IFS/read/compare/continue sequence repeats verbatim for every /proc/meminfo key from MemAvailable through HardwareCorrupted, none matching AnonHugePages] 00:04:46.704 10:18:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.704 10:18:40 -- setup/common.sh@33 -- # echo 0 00:04:46.704 10:18:40 -- setup/common.sh@33 -- # return 0 00:04:46.704 10:18:40 -- setup/hugepages.sh@97 -- # anon=0 10:18:40 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 10:18:40 -- setup/common.sh@17 -- # local get=HugePages_Surp 10:18:40 -- setup/common.sh@18 -- # local node= 10:18:40 -- setup/common.sh@19 -- # local var val 10:18:40 -- setup/common.sh@20 -- # local mem_f mem 10:18:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:46.704 10:18:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.704 10:18:40 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.704 10:18:40 -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.704 10:18:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.704 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.704 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.704 10:18:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251092 kB' 'MemFree: 5201464 kB' 'MemAvailable: 9518048 kB' 'Buffers: 37552 kB' 'Cached: 4404724 kB' 'SwapCached: 0 kB' 'Active: 1191328 kB' 'Inactive: 3368624 kB' 'Active(anon): 126704 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1064624 kB' 'Inactive(file): 3366836 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 684 kB' 'Writeback: 0 kB' 'AnonPages: 135596 kB' 'Mapped: 72520 kB' 'Shmem: 2616 kB' 'KReclaimable: 207196 kB' 'Slab: 298836 kB' 'SReclaimable: 207196 kB' 'SUnreclaim: 91640 kB' 'KernelStack: 4352 kB' 'PageTables: 3424 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076968 kB' 'Committed_AS: 608876 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14132 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB' 00:04:46.704 10:18:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.704 10:18:40 -- setup/common.sh@32 -- # continue 00:04:46.704 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.704 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.704 10:18:40 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.704 10:18:40 -- setup/common.sh@32 -- # continue 00:04:46.704 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.704 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.704 10:18:40 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.704 10:18:40 -- setup/common.sh@32 -- # continue 00:04:46.704 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.704 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.704 10:18:40 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.704 10:18:40 -- setup/common.sh@32 -- # continue 00:04:46.704 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.704 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.704 10:18:40 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.704 10:18:40 -- setup/common.sh@32 -- # continue 00:04:46.704 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.704 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.704 10:18:40 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.704 10:18:40 -- setup/common.sh@32 -- # continue 00:04:46.704 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.704 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.704 10:18:40 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.704 10:18:40 -- setup/common.sh@32 -- # continue 00:04:46.704 10:18:40 -- setup/common.sh@31 -- # IFS=': 
' 00:04:46.704 10:18:40 -- setup/common.sh@31 -- # read -r var val _ [xtrace condensed: the same @31/@32 IFS/read/compare/continue sequence repeats verbatim for every /proc/meminfo key from Inactive through HugePages_Total, none matching HugePages_Surp] 00:04:46.705 10:18:40 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:46.705 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.705 10:18:40 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.705 10:18:40 -- setup/common.sh@32 -- # continue 00:04:46.705 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.705 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.705 10:18:40 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.705 10:18:40 -- setup/common.sh@32 -- # continue 00:04:46.705 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.705 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.705 10:18:40 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.705 10:18:40 -- setup/common.sh@33 -- # echo 0 00:04:46.705 10:18:40 -- setup/common.sh@33 -- # return 0 00:04:46.705 10:18:40 -- setup/hugepages.sh@99 -- # surp=0 00:04:46.705 10:18:40 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:46.705 10:18:40 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:46.705 10:18:40 -- setup/common.sh@18 -- # local node= 00:04:46.705 10:18:40 -- setup/common.sh@19 -- # local var val 00:04:46.706 10:18:40 -- setup/common.sh@20 -- # local mem_f mem 00:04:46.706 10:18:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.706 10:18:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.706 10:18:40 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.706 10:18:40 -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.706 10:18:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.706 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.706 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.706 10:18:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251092 kB' 'MemFree: 5201504 kB' 'MemAvailable: 9518088 kB' 'Buffers: 37552 kB' 'Cached: 4404724 kB' 'SwapCached: 0 kB' 'Active: 1190680 kB' 'Inactive: 3368624 kB' 'Active(anon): 126056 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1064624 kB' 'Inactive(file): 3366836 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 684 kB' 'Writeback: 0 kB' 'AnonPages: 134828 kB' 'Mapped: 72312 kB' 'Shmem: 2616 kB' 'KReclaimable: 207196 kB' 'Slab: 298988 kB' 'SReclaimable: 207196 kB' 'SUnreclaim: 91792 kB' 'KernelStack: 4344 kB' 'PageTables: 3184 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076968 kB' 'Committed_AS: 608876 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14132 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB' 00:04:46.706 10:18:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.706 10:18:40 -- setup/common.sh@32 -- # continue 00:04:46.706 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.706 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.706 10:18:40 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.706 10:18:40 -- setup/common.sh@32 -- # continue 00:04:46.706 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.706 10:18:40 -- 
setup/common.sh@31 -- # read -r var val _ [xtrace condensed: the same @31/@32 IFS/read/compare/continue sequence repeats verbatim for every /proc/meminfo key from MemAvailable through ShmemPmdMapped, none matching HugePages_Rsvd] 00:04:46.707 10:18:40 -- setup/common.sh@31 -- # IFS=': 
' 00:04:46.707 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.707 10:18:40 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.707 10:18:40 -- setup/common.sh@32 -- # continue 00:04:46.707 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.707 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.707 10:18:40 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.707 10:18:40 -- setup/common.sh@32 -- # continue 00:04:46.707 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.707 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.707 10:18:40 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.707 10:18:40 -- setup/common.sh@32 -- # continue 00:04:46.707 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.707 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.707 10:18:40 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.707 10:18:40 -- setup/common.sh@32 -- # continue 00:04:46.707 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.707 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.707 10:18:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.707 10:18:40 -- setup/common.sh@32 -- # continue 00:04:46.707 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.707 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.707 10:18:40 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.707 10:18:40 -- setup/common.sh@32 -- # continue 00:04:46.707 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.707 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.707 10:18:40 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.707 10:18:40 -- setup/common.sh@33 -- # echo 0 00:04:46.707 10:18:40 -- setup/common.sh@33 -- # return 0 00:04:46.707 nr_hugepages=1024 00:04:46.707 resv_hugepages=0 00:04:46.707 surplus_hugepages=0 00:04:46.707 anon_hugepages=0 00:04:46.707 10:18:40 -- setup/hugepages.sh@100 -- # resv=0 00:04:46.707 10:18:40 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:46.707 10:18:40 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:46.707 10:18:40 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:46.707 10:18:40 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:46.707 10:18:40 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:46.707 10:18:40 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:46.707 10:18:40 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:46.707 10:18:40 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:46.707 10:18:40 -- setup/common.sh@18 -- # local node= 00:04:46.707 10:18:40 -- setup/common.sh@19 -- # local var val 00:04:46.707 10:18:40 -- setup/common.sh@20 -- # local mem_f mem 00:04:46.707 10:18:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.707 10:18:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.707 10:18:40 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.707 10:18:40 -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.707 10:18:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.707 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.707 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.707 10:18:40 -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 12251092 kB' 'MemFree: 5202232 kB' 'MemAvailable: 9518816 kB' 'Buffers: 37552 kB' 'Cached: 4404724 kB' 'SwapCached: 0 kB' 'Active: 1190324 kB' 'Inactive: 3368624 kB' 'Active(anon): 125700 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1064624 kB' 'Inactive(file): 3366836 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 684 kB' 'Writeback: 0 kB' 'AnonPages: 135344 kB' 'Mapped: 72324 kB' 'Shmem: 2616 kB' 'KReclaimable: 207196 kB' 'Slab: 298828 kB' 'SReclaimable: 207196 kB' 'SUnreclaim: 91632 kB' 'KernelStack: 4336 kB' 'PageTables: 2988 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076968 kB' 'Committed_AS: 613716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14132 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB' 00:04:46.708 10:18:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.708 10:18:40 -- setup/common.sh@32 -- # continue 00:04:46.708 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.708 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.708 10:18:40 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.708 10:18:40 -- setup/common.sh@32 -- # continue 00:04:46.708 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.708 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.708 10:18:40 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.708 10:18:40 -- setup/common.sh@32 -- # continue 00:04:46.708 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.708 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.708 10:18:40 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.708 10:18:40 -- setup/common.sh@32 -- # continue 00:04:46.708 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.708 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.708 10:18:40 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.708 10:18:40 -- setup/common.sh@32 -- # continue 00:04:46.708 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.708 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.708 10:18:40 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.708 10:18:40 -- setup/common.sh@32 -- # continue 00:04:46.708 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.708 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.708 10:18:40 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.708 10:18:40 -- setup/common.sh@32 -- # continue 00:04:46.708 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.708 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.708 10:18:40 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.708 10:18:40 -- setup/common.sh@32 -- # continue 00:04:46.708 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.708 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.708 10:18:40 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.708 10:18:40 -- 
setup/common.sh@32 -- # continue [xtrace condensed: the same @31/@32 IFS/read/compare/continue sequence repeats verbatim for every /proc/meminfo key from Inactive(anon) through VmallocUsed, none matching HugePages_Total] 00:04:46.709 10:18:40 -- setup/common.sh@32 -- # 
continue 00:04:46.709 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.709 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.709 10:18:40 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.709 10:18:40 -- setup/common.sh@32 -- # continue 00:04:46.709 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.709 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.709 10:18:40 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.709 10:18:40 -- setup/common.sh@32 -- # continue 00:04:46.709 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.709 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.709 10:18:40 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.709 10:18:40 -- setup/common.sh@32 -- # continue 00:04:46.709 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.709 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.709 10:18:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.709 10:18:40 -- setup/common.sh@32 -- # continue 00:04:46.709 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.709 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.709 10:18:40 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.709 10:18:40 -- setup/common.sh@32 -- # continue 00:04:46.709 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.709 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.709 10:18:40 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.709 10:18:40 -- setup/common.sh@32 -- # continue 00:04:46.709 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.709 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.709 10:18:40 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.709 10:18:40 -- setup/common.sh@32 -- # continue 00:04:46.709 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.709 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.709 10:18:40 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.709 10:18:40 -- setup/common.sh@32 -- # continue 00:04:46.709 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.709 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.709 10:18:40 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.709 10:18:40 -- setup/common.sh@32 -- # continue 00:04:46.709 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.709 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.709 10:18:40 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.709 10:18:40 -- setup/common.sh@32 -- # continue 00:04:46.709 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.709 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.709 10:18:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.709 10:18:40 -- setup/common.sh@33 -- # echo 1024 00:04:46.709 10:18:40 -- setup/common.sh@33 -- # return 0 00:04:46.709 10:18:40 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:46.709 10:18:40 -- setup/hugepages.sh@112 -- # get_nodes 00:04:46.709 10:18:40 -- setup/hugepages.sh@27 -- # local node 00:04:46.709 10:18:40 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:46.709 10:18:40 -- 
setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:46.709 10:18:40 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:46.709 10:18:40 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:46.709 10:18:40 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:46.709 10:18:40 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:46.709 10:18:40 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:46.709 10:18:40 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:46.709 10:18:40 -- setup/common.sh@18 -- # local node=0 00:04:46.709 10:18:40 -- setup/common.sh@19 -- # local var val 00:04:46.709 10:18:40 -- setup/common.sh@20 -- # local mem_f mem 00:04:46.709 10:18:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.709 10:18:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:46.709 10:18:40 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:46.709 10:18:40 -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.709 10:18:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.709 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.709 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.709 10:18:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251092 kB' 'MemFree: 5201940 kB' 'MemUsed: 7049152 kB' 'Active: 1190744 kB' 'Inactive: 3368624 kB' 'Active(anon): 126120 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1064624 kB' 'Inactive(file): 3366836 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'Dirty: 684 kB' 'Writeback: 0 kB' 'FilePages: 4442276 kB' 'Mapped: 72324 kB' 'AnonPages: 135804 kB' 'Shmem: 2616 kB' 'KernelStack: 4420 kB' 'PageTables: 3016 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 207196 kB' 'Slab: 298828 kB' 'SReclaimable: 207196 kB' 'SUnreclaim: 91632 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:46.709 10:18:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.709 10:18:40 -- setup/common.sh@32 -- # continue 00:04:46.710 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.710 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.710 10:18:40 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.710 10:18:40 -- setup/common.sh@32 -- # continue 00:04:46.710 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.710 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.710 10:18:40 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.710 10:18:40 -- setup/common.sh@32 -- # continue 00:04:46.710 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.710 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.710 10:18:40 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.710 10:18:40 -- setup/common.sh@32 -- # continue 00:04:46.710 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.710 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.710 10:18:40 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.710 10:18:40 -- setup/common.sh@32 -- # continue 00:04:46.710 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.710 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.710 10:18:40 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:46.710 10:18:40 -- setup/common.sh@32 -- # continue 00:04:46.710 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.710 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.710 10:18:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.710 10:18:40 -- setup/common.sh@32 -- # continue 00:04:46.710 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.710 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.710 10:18:40 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.710 10:18:40 -- setup/common.sh@32 -- # continue 00:04:46.710 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.710 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.710 10:18:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.710 10:18:40 -- setup/common.sh@32 -- # continue 00:04:46.710 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.710 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.710 10:18:40 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.710 10:18:40 -- setup/common.sh@32 -- # continue 00:04:46.710 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.710 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.710 10:18:40 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.710 10:18:40 -- setup/common.sh@32 -- # continue 00:04:46.710 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.710 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.710 10:18:40 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.710 10:18:40 -- setup/common.sh@32 -- # continue 00:04:46.710 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.710 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.710 10:18:40 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.710 10:18:40 -- setup/common.sh@32 -- # continue 00:04:46.710 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.710 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.710 10:18:40 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.710 10:18:40 -- setup/common.sh@32 -- # continue 00:04:46.710 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.710 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.710 10:18:40 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.710 10:18:40 -- setup/common.sh@32 -- # continue 00:04:46.710 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.710 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.710 10:18:40 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.710 10:18:40 -- setup/common.sh@32 -- # continue 00:04:46.710 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.710 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.710 10:18:40 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.710 10:18:40 -- setup/common.sh@32 -- # continue 00:04:46.710 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.710 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.710 10:18:40 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.710 10:18:40 -- setup/common.sh@32 -- # continue 00:04:46.710 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.710 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 
00:04:46.710 10:18:40 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.710 10:18:40 -- setup/common.sh@32 -- # continue 00:04:46.710 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.710 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.710 10:18:40 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.710 10:18:40 -- setup/common.sh@32 -- # continue 00:04:46.710 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.710 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.710 10:18:40 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.710 10:18:40 -- setup/common.sh@32 -- # continue 00:04:46.710 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.710 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.710 10:18:40 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.710 10:18:40 -- setup/common.sh@32 -- # continue 00:04:46.710 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.710 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.710 10:18:40 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.710 10:18:40 -- setup/common.sh@32 -- # continue 00:04:46.710 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.710 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.710 10:18:40 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.710 10:18:40 -- setup/common.sh@32 -- # continue 00:04:46.710 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.710 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.710 10:18:40 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.710 10:18:40 -- setup/common.sh@32 -- # continue 00:04:46.710 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.710 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.710 10:18:40 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.710 10:18:40 -- setup/common.sh@32 -- # continue 00:04:46.710 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.710 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.710 10:18:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.710 10:18:40 -- setup/common.sh@32 -- # continue 00:04:46.710 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.710 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.710 10:18:40 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.710 10:18:40 -- setup/common.sh@32 -- # continue 00:04:46.710 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.710 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.710 10:18:40 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.710 10:18:40 -- setup/common.sh@32 -- # continue 00:04:46.710 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.710 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.710 10:18:40 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.710 10:18:40 -- setup/common.sh@32 -- # continue 00:04:46.710 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.710 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.710 10:18:40 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.710 10:18:40 -- setup/common.sh@32 -- # continue 
00:04:46.710 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.710 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.710 10:18:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.710 10:18:40 -- setup/common.sh@32 -- # continue 00:04:46.710 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.710 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.711 10:18:40 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.711 10:18:40 -- setup/common.sh@32 -- # continue 00:04:46.711 10:18:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.711 10:18:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.711 10:18:40 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.711 10:18:40 -- setup/common.sh@33 -- # echo 0 00:04:46.711 10:18:40 -- setup/common.sh@33 -- # return 0 00:04:46.711 node0=1024 expecting 1024 00:04:46.711 ************************************ 00:04:46.711 END TEST no_shrink_alloc 00:04:46.711 ************************************ 00:04:46.711 10:18:40 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:46.711 10:18:40 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:46.711 10:18:40 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:46.711 10:18:40 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:46.711 10:18:40 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:46.711 10:18:40 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:46.711 00:04:46.711 real 0m1.375s 00:04:46.711 user 0m0.560s 00:04:46.711 sys 0m0.792s 00:04:46.711 10:18:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.711 10:18:40 -- common/autotest_common.sh@10 -- # set +x 00:04:46.711 10:18:40 -- setup/hugepages.sh@217 -- # clear_hp 00:04:46.711 10:18:40 -- setup/hugepages.sh@37 -- # local node hp 00:04:46.711 10:18:40 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:46.711 10:18:40 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:46.711 10:18:40 -- setup/hugepages.sh@41 -- # echo 0 00:04:46.711 10:18:40 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:46.711 10:18:40 -- setup/hugepages.sh@41 -- # echo 0 00:04:46.711 10:18:40 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:46.711 10:18:40 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:46.711 ************************************ 00:04:46.711 END TEST hugepages 00:04:46.711 ************************************ 00:04:46.711 00:04:46.711 real 0m6.206s 00:04:46.711 user 0m2.073s 00:04:46.711 sys 0m4.142s 00:04:46.711 10:18:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.711 10:18:40 -- common/autotest_common.sh@10 -- # set +x 00:04:46.711 10:18:40 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:46.711 10:18:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:46.711 10:18:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:46.711 10:18:40 -- common/autotest_common.sh@10 -- # set +x 00:04:46.711 ************************************ 00:04:46.711 START TEST driver 00:04:46.711 ************************************ 00:04:46.711 10:18:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:46.969 * Looking for test storage... 
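The long run of setup/common.sh@31/@32 statements above (condensed here) is a single field-scanning loop inside the suite's get_meminfo helper. The sketch below is illustrative only -- the function name is invented and it reads plain /proc/meminfo, whereas the traced helper, as the log shows, may also redirect to /sys/devices/system/node/nodeN/meminfo:
# Sketch of the scan traced above: split each meminfo line on ': ',
# skip keys until the requested one matches, then print its value.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # one "continue" per skipped key
        echo "$val"                        # e.g. 1024 for HugePages_Total
        return 0
    done < /proc/meminfo
    return 1
}
# usage: get_meminfo_sketch HugePages_Surp   (prints 0 on this node)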
00:04:46.969 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:46.969 10:18:40 -- setup/driver.sh@68 -- # setup reset 00:04:46.969 10:18:40 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:46.969 10:18:40 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:47.228 10:18:41 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:47.228 10:18:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:47.228 10:18:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:47.228 10:18:41 -- common/autotest_common.sh@10 -- # set +x 00:04:47.228 ************************************ 00:04:47.228 START TEST guess_driver 00:04:47.228 ************************************ 00:04:47.228 10:18:41 -- common/autotest_common.sh@1104 -- # guess_driver 00:04:47.228 10:18:41 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:47.228 10:18:41 -- setup/driver.sh@47 -- # local fail=0 00:04:47.228 10:18:41 -- setup/driver.sh@49 -- # pick_driver 00:04:47.228 10:18:41 -- setup/driver.sh@36 -- # vfio 00:04:47.228 10:18:41 -- setup/driver.sh@21 -- # local iommu_grups 00:04:47.228 10:18:41 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:47.228 10:18:41 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:47.228 10:18:41 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:47.228 10:18:41 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:47.228 10:18:41 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:47.228 10:18:41 -- setup/driver.sh@29 -- # [[ N == Y ]] 00:04:47.228 10:18:41 -- setup/driver.sh@32 -- # return 1 00:04:47.228 10:18:41 -- setup/driver.sh@38 -- # uio 00:04:47.228 10:18:41 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:47.228 10:18:41 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:47.228 10:18:41 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:47.228 10:18:41 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:47.228 10:18:41 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/5.4.0-176-generic/kernel/drivers/uio/uio.ko 00:04:47.228 insmod /lib/modules/5.4.0-176-generic/kernel/drivers/uio/uio_pci_generic.ko == *\.\k\o* ]] 00:04:47.228 10:18:41 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:47.228 10:18:41 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:47.228 Looking for driver=uio_pci_generic 00:04:47.228 10:18:41 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:47.228 10:18:41 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:47.228 10:18:41 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:47.228 10:18:41 -- setup/driver.sh@45 -- # setup output config 00:04:47.228 10:18:41 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:47.228 10:18:41 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:47.804 10:18:41 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:47.804 10:18:41 -- setup/driver.sh@58 -- # continue 00:04:47.804 10:18:41 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:47.804 10:18:41 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:47.804 10:18:41 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:47.804 10:18:41 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:48.755 10:18:42 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:48.755 10:18:42 -- setup/driver.sh@65 -- # setup reset 
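The pick_driver trace above boils down to a two-step preference: vfio when the IOMMU is usable, otherwise uio_pci_generic. A hedged bash sketch of that decision follows (function name invented; it mirrors the traced checks rather than reproducing setup/driver.sh itself):
shopt -s nullglob    # so an empty /sys/kernel/iommu_groups expands to nothing
pick_driver_sketch() {
    local -a groups=(/sys/kernel/iommu_groups/*)
    local unsafe=N
    [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
        unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    if (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; then
        echo vfio-pci                 # IOMMU usable: prefer vfio
    elif modprobe --show-depends uio_pci_generic &> /dev/null; then
        echo uio_pci_generic          # the fallback chosen in this run
    else
        echo 'No valid driver found'  # the failure string tested at @51
    fi
}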
00:04:48.755 10:18:42 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:48.755 10:18:42 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:49.320 00:04:49.320 real 0m1.928s 00:04:49.320 user 0m0.469s 00:04:49.320 sys 0m1.397s 00:04:49.320 ************************************ 00:04:49.320 END TEST guess_driver 00:04:49.320 ************************************ 00:04:49.320 10:18:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.320 10:18:43 -- common/autotest_common.sh@10 -- # set +x 00:04:49.320 00:04:49.320 real 0m2.476s 00:04:49.320 user 0m0.727s 00:04:49.320 sys 0m1.691s 00:04:49.320 10:18:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.320 10:18:43 -- common/autotest_common.sh@10 -- # set +x 00:04:49.320 ************************************ 00:04:49.320 END TEST driver 00:04:49.320 ************************************ 00:04:49.320 10:18:43 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:49.320 10:18:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:49.320 10:18:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:49.320 10:18:43 -- common/autotest_common.sh@10 -- # set +x 00:04:49.320 ************************************ 00:04:49.320 START TEST devices 00:04:49.320 ************************************ 00:04:49.320 10:18:43 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:49.320 * Looking for test storage... 00:04:49.320 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:49.320 10:18:43 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:49.320 10:18:43 -- setup/devices.sh@192 -- # setup reset 00:04:49.320 10:18:43 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:49.320 10:18:43 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:49.886 10:18:43 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:49.886 10:18:43 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:49.886 10:18:43 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:49.886 10:18:43 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:49.886 10:18:43 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:49.886 10:18:43 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:49.886 10:18:43 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:49.886 10:18:43 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:49.886 10:18:43 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:49.886 10:18:43 -- setup/devices.sh@196 -- # blocks=() 00:04:49.886 10:18:43 -- setup/devices.sh@196 -- # declare -a blocks 00:04:49.886 10:18:43 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:49.886 10:18:43 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:49.886 10:18:43 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:49.886 10:18:43 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:49.886 10:18:43 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:49.886 10:18:43 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:49.886 10:18:43 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:04:49.886 10:18:43 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:49.886 10:18:43 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:49.886 10:18:43 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:04:49.886 10:18:43 -- 
scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:49.886 No valid GPT data, bailing 00:04:49.886 10:18:43 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:49.886 10:18:43 -- scripts/common.sh@393 -- # pt= 00:04:49.886 10:18:43 -- scripts/common.sh@394 -- # return 1 00:04:49.886 10:18:43 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:49.886 10:18:43 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:49.886 10:18:43 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:49.886 10:18:43 -- setup/common.sh@80 -- # echo 5368709120 00:04:49.886 10:18:43 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:49.886 10:18:43 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:49.886 10:18:43 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:04:49.886 10:18:43 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:49.886 10:18:43 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:49.886 10:18:43 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:49.886 10:18:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:49.886 10:18:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:49.886 10:18:43 -- common/autotest_common.sh@10 -- # set +x 00:04:49.886 ************************************ 00:04:49.886 START TEST nvme_mount 00:04:49.886 ************************************ 00:04:49.886 10:18:43 -- common/autotest_common.sh@1104 -- # nvme_mount 00:04:49.886 10:18:43 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:49.886 10:18:43 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:49.886 10:18:43 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:49.886 10:18:43 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:49.886 10:18:43 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:49.886 10:18:43 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:49.886 10:18:43 -- setup/common.sh@40 -- # local part_no=1 00:04:49.886 10:18:43 -- setup/common.sh@41 -- # local size=1073741824 00:04:49.886 10:18:43 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:49.886 10:18:43 -- setup/common.sh@44 -- # parts=() 00:04:49.886 10:18:43 -- setup/common.sh@44 -- # local parts 00:04:49.886 10:18:43 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:49.886 10:18:43 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:49.886 10:18:43 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:49.886 10:18:43 -- setup/common.sh@46 -- # (( part++ )) 00:04:49.886 10:18:43 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:49.886 10:18:43 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:49.886 10:18:43 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:49.886 10:18:43 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:50.820 Creating new GPT entries in memory. 00:04:50.820 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:50.820 other utilities. 00:04:50.820 10:18:44 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:50.820 10:18:44 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:50.820 10:18:44 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:50.820 10:18:44 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:50.820 10:18:44 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:52.193 Creating new GPT entries in memory. 00:04:52.193 The operation has completed successfully. 00:04:52.193 10:18:45 -- setup/common.sh@57 -- # (( part++ )) 00:04:52.193 10:18:45 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:52.193 10:18:45 -- setup/common.sh@62 -- # wait 98334 00:04:52.193 10:18:45 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:52.193 10:18:45 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:52.193 10:18:45 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:52.193 10:18:45 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:52.193 10:18:45 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:52.193 10:18:45 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:52.193 10:18:45 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:52.193 10:18:45 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:52.193 10:18:45 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:52.193 10:18:45 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:52.193 10:18:45 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:52.193 10:18:45 -- setup/devices.sh@53 -- # local found=0 00:04:52.193 10:18:45 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:52.193 10:18:45 -- setup/devices.sh@56 -- # : 00:04:52.193 10:18:45 -- setup/devices.sh@59 -- # local pci status 00:04:52.193 10:18:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.193 10:18:45 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:52.193 10:18:45 -- setup/devices.sh@47 -- # setup output config 00:04:52.193 10:18:45 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:52.193 10:18:45 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:52.193 10:18:46 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:52.193 10:18:46 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:52.193 10:18:46 -- setup/devices.sh@63 -- # found=1 00:04:52.193 10:18:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.193 10:18:46 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:52.193 10:18:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.193 10:18:46 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:52.193 10:18:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.567 10:18:47 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:53.567 10:18:47 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:53.567 10:18:47 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:53.567 10:18:47 -- setup/devices.sh@73 -- # 
[[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:53.567 10:18:47 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:53.567 10:18:47 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:53.567 10:18:47 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:53.567 10:18:47 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:53.567 10:18:47 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:53.567 10:18:47 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:53.567 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:53.567 10:18:47 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:53.567 10:18:47 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:53.567 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:53.567 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:53.567 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:53.567 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:53.567 10:18:47 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:53.567 10:18:47 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:53.567 10:18:47 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:53.567 10:18:47 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:53.567 10:18:47 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:53.567 10:18:47 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:53.567 10:18:47 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:53.567 10:18:47 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:53.567 10:18:47 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:53.567 10:18:47 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:53.567 10:18:47 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:53.567 10:18:47 -- setup/devices.sh@53 -- # local found=0 00:04:53.567 10:18:47 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:53.567 10:18:47 -- setup/devices.sh@56 -- # : 00:04:53.567 10:18:47 -- setup/devices.sh@59 -- # local pci status 00:04:53.567 10:18:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.567 10:18:47 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:53.567 10:18:47 -- setup/devices.sh@47 -- # setup output config 00:04:53.567 10:18:47 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:53.567 10:18:47 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:53.567 10:18:47 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:53.567 10:18:47 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:53.567 10:18:47 -- setup/devices.sh@63 -- # found=1 00:04:53.567 10:18:47 -- setup/devices.sh@60 -- # read -r pci 
_ _ status 00:04:53.567 10:18:47 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:53.567 10:18:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.825 10:18:47 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:53.825 10:18:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.759 10:18:48 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:54.759 10:18:48 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:54.759 10:18:48 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:54.759 10:18:48 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:54.759 10:18:48 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:54.759 10:18:48 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:54.759 10:18:48 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:04:54.759 10:18:48 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:54.759 10:18:48 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:54.759 10:18:48 -- setup/devices.sh@50 -- # local mount_point= 00:04:54.759 10:18:48 -- setup/devices.sh@51 -- # local test_file= 00:04:54.759 10:18:48 -- setup/devices.sh@53 -- # local found=0 00:04:54.759 10:18:48 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:54.759 10:18:48 -- setup/devices.sh@59 -- # local pci status 00:04:54.759 10:18:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.759 10:18:48 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:54.759 10:18:48 -- setup/devices.sh@47 -- # setup output config 00:04:54.759 10:18:48 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:54.759 10:18:48 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:55.016 10:18:48 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:55.016 10:18:48 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:55.016 10:18:48 -- setup/devices.sh@63 -- # found=1 00:04:55.016 10:18:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.016 10:18:48 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:55.016 10:18:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.016 10:18:48 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:55.016 10:18:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.014 10:18:49 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:56.014 10:18:49 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:56.014 10:18:49 -- setup/devices.sh@68 -- # return 0 00:04:56.014 10:18:49 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:56.014 10:18:49 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:56.014 10:18:49 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:56.014 10:18:49 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:56.014 10:18:49 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:56.014 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:56.014 ************************************ 00:04:56.014 END TEST nvme_mount 00:04:56.014 ************************************ 00:04:56.014 00:04:56.014 real 0m6.261s 00:04:56.014 user 
0m0.711s 00:04:56.014 sys 0m3.426s 00:04:56.014 10:18:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.014 10:18:49 -- common/autotest_common.sh@10 -- # set +x 00:04:56.271 10:18:49 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:56.272 10:18:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:56.272 10:18:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:56.272 10:18:49 -- common/autotest_common.sh@10 -- # set +x 00:04:56.272 ************************************ 00:04:56.272 START TEST dm_mount 00:04:56.272 ************************************ 00:04:56.272 10:18:49 -- common/autotest_common.sh@1104 -- # dm_mount 00:04:56.272 10:18:49 -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:56.272 10:18:49 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:56.272 10:18:49 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:56.272 10:18:49 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:56.272 10:18:49 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:56.272 10:18:49 -- setup/common.sh@40 -- # local part_no=2 00:04:56.272 10:18:49 -- setup/common.sh@41 -- # local size=1073741824 00:04:56.272 10:18:49 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:56.272 10:18:49 -- setup/common.sh@44 -- # parts=() 00:04:56.272 10:18:49 -- setup/common.sh@44 -- # local parts 00:04:56.272 10:18:49 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:56.272 10:18:49 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:56.272 10:18:49 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:56.272 10:18:49 -- setup/common.sh@46 -- # (( part++ )) 00:04:56.272 10:18:49 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:56.272 10:18:49 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:56.272 10:18:49 -- setup/common.sh@46 -- # (( part++ )) 00:04:56.272 10:18:49 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:56.272 10:18:50 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:56.272 10:18:50 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:56.272 10:18:50 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:57.206 Creating new GPT entries in memory. 00:04:57.206 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:57.206 other utilities. 00:04:57.206 10:18:51 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:57.206 10:18:51 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:57.206 10:18:51 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:57.206 10:18:51 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:57.206 10:18:51 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:58.580 Creating new GPT entries in memory. 00:04:58.580 The operation has completed successfully. 00:04:58.580 10:18:52 -- setup/common.sh@57 -- # (( part++ )) 00:04:58.580 10:18:52 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:58.580 10:18:52 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:58.580 10:18:52 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:58.580 10:18:52 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:04:59.515 The operation has completed successfully. 
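Both dm_mount partitions just created follow the same start/end arithmetic traced earlier for nvme_mount: each partition begins at sector 2048 or one past the previous partition's end. A simplified sketch of that layout logic (function name invented; the real partition_drive additionally waits for the partition uevents via sync_dev_uevents.sh before returning):
# Sketch of the traced partitioning: zap the GPT, then lay out part_no
# equal partitions back to back, starting at sector 2048.
partition_drive_sketch() {
    local disk=$1 part_no=${2:-1} size=1073741824   # 1 GiB per partition
    local part part_start=0 part_end=0
    (( size /= 4096 ))                # sector count, as in the traced script
    sgdisk "/dev/$disk" --zap-all     # destroys existing GPT structures
    for (( part = 1; part <= part_no; part++ )); do
        (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
        (( part_end = part_start + size - 1 ))
        # flock serializes concurrent sgdisk calls against the same disk
        flock "/dev/$disk" sgdisk "/dev/$disk" "--new=$part:$part_start:$part_end"
    done
}
# e.g. partition_drive_sketch nvme0n1 2 reproduces --new=1:2048:264191 and
# --new=2:264192:526335 as seen in the dm_mount trace above.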
00:04:59.515 10:18:53 -- setup/common.sh@57 -- # (( part++ )) 00:04:59.515 10:18:53 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:59.515 10:18:53 -- setup/common.sh@62 -- # wait 98838 00:04:59.515 10:18:53 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:59.515 10:18:53 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:59.515 10:18:53 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:59.515 10:18:53 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:59.515 10:18:53 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:59.515 10:18:53 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:59.515 10:18:53 -- setup/devices.sh@161 -- # break 00:04:59.515 10:18:53 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:59.515 10:18:53 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:59.515 10:18:53 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:59.515 10:18:53 -- setup/devices.sh@166 -- # dm=dm-0 00:04:59.515 10:18:53 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:59.515 10:18:53 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:59.515 10:18:53 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:59.515 10:18:53 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:59.515 10:18:53 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:59.515 10:18:53 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:59.515 10:18:53 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:59.515 10:18:53 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:59.515 10:18:53 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:59.515 10:18:53 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:59.515 10:18:53 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:59.515 10:18:53 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:59.515 10:18:53 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:59.515 10:18:53 -- setup/devices.sh@53 -- # local found=0 00:04:59.515 10:18:53 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:59.515 10:18:53 -- setup/devices.sh@56 -- # : 00:04:59.515 10:18:53 -- setup/devices.sh@59 -- # local pci status 00:04:59.515 10:18:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.515 10:18:53 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:59.515 10:18:53 -- setup/devices.sh@47 -- # setup output config 00:04:59.515 10:18:53 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:59.515 10:18:53 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:59.515 10:18:53 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:59.515 10:18:53 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ 
*\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:59.515 10:18:53 -- setup/devices.sh@63 -- # found=1 00:04:59.515 10:18:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.515 10:18:53 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:59.515 10:18:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.773 10:18:53 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:59.773 10:18:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.148 10:18:54 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:01.148 10:18:54 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:01.148 10:18:54 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:01.148 10:18:54 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:01.148 10:18:54 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:01.148 10:18:54 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:01.148 10:18:54 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:01.149 10:18:54 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:01.149 10:18:54 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:01.149 10:18:54 -- setup/devices.sh@50 -- # local mount_point= 00:05:01.149 10:18:54 -- setup/devices.sh@51 -- # local test_file= 00:05:01.149 10:18:54 -- setup/devices.sh@53 -- # local found=0 00:05:01.149 10:18:54 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:01.149 10:18:54 -- setup/devices.sh@59 -- # local pci status 00:05:01.149 10:18:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.149 10:18:54 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:01.149 10:18:54 -- setup/devices.sh@47 -- # setup output config 00:05:01.149 10:18:54 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:01.149 10:18:54 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:01.149 10:18:54 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:01.149 10:18:54 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:01.149 10:18:54 -- setup/devices.sh@63 -- # found=1 00:05:01.149 10:18:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.149 10:18:54 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:01.149 10:18:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.149 10:18:54 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:01.149 10:18:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.084 10:18:55 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:02.084 10:18:55 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:02.084 10:18:55 -- setup/devices.sh@68 -- # return 0 00:05:02.084 10:18:55 -- setup/devices.sh@187 -- # cleanup_dm 00:05:02.084 10:18:55 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:02.084 10:18:55 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:02.084 10:18:55 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:02.342 10:18:56 -- 
setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:02.342 10:18:56 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:02.342 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:02.342 10:18:56 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:02.342 10:18:56 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:02.342 ************************************ 00:05:02.342 END TEST dm_mount 00:05:02.342 ************************************ 00:05:02.342 00:05:02.342 real 0m6.048s 00:05:02.342 user 0m0.449s 00:05:02.342 sys 0m2.418s 00:05:02.342 10:18:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.342 10:18:56 -- common/autotest_common.sh@10 -- # set +x 00:05:02.342 10:18:56 -- setup/devices.sh@1 -- # cleanup 00:05:02.342 10:18:56 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:02.342 10:18:56 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:02.342 10:18:56 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:02.342 10:18:56 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:02.342 10:18:56 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:02.342 10:18:56 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:02.342 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:02.342 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:02.342 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:02.342 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:02.342 10:18:56 -- setup/devices.sh@12 -- # cleanup_dm 00:05:02.343 10:18:56 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:02.343 10:18:56 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:02.343 10:18:56 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:02.343 10:18:56 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:02.343 10:18:56 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:02.343 10:18:56 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:02.343 00:05:02.343 real 0m13.096s 00:05:02.343 user 0m1.568s 00:05:02.343 sys 0m6.146s 00:05:02.343 10:18:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.343 10:18:56 -- common/autotest_common.sh@10 -- # set +x 00:05:02.343 ************************************ 00:05:02.343 END TEST devices 00:05:02.343 ************************************ 00:05:02.343 00:05:02.343 real 0m26.683s 00:05:02.343 user 0m6.110s 00:05:02.343 sys 0m15.203s 00:05:02.343 10:18:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.343 10:18:56 -- common/autotest_common.sh@10 -- # set +x 00:05:02.343 ************************************ 00:05:02.343 END TEST setup.sh 00:05:02.343 ************************************ 00:05:02.343 10:18:56 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:02.601 Hugepages 00:05:02.601 node hugesize free / total 00:05:02.601 node0 1048576kB 0 / 0 00:05:02.601 node0 2048kB 2048 / 2048 00:05:02.601 00:05:02.601 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:02.601 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:02.859 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:02.859 10:18:56 -- spdk/autotest.sh@141 -- # uname -s 00:05:02.859 10:18:56 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:05:02.859 10:18:56 -- spdk/autotest.sh@143 -- # 
nvme_namespace_revert 00:05:02.859 10:18:56 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:03.120 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:03.378 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:04.313 10:18:58 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:05.247 10:18:59 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:05.247 10:18:59 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:05.247 10:18:59 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:05:05.247 10:18:59 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:05:05.247 10:18:59 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:05.247 10:18:59 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:05.247 10:18:59 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:05.247 10:18:59 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:05.247 10:18:59 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:05.247 10:18:59 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:05.247 10:18:59 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:05:05.247 10:18:59 -- common/autotest_common.sh@1521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:05.505 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:05.764 Waiting for block devices as requested 00:05:05.764 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:05:05.764 10:18:59 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:05:05.764 10:18:59 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:05:05.764 10:18:59 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:05:05.764 10:18:59 -- common/autotest_common.sh@1487 -- # grep 0000:00:06.0/nvme/nvme 00:05:05.764 10:18:59 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:05.764 10:18:59 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:05:05.764 10:18:59 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:05.764 10:18:59 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:05.764 10:18:59 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:05:05.764 10:18:59 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:05:05.764 10:18:59 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:05:05.764 10:18:59 -- common/autotest_common.sh@1530 -- # grep oacs 00:05:05.764 10:18:59 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:05:05.764 10:18:59 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:05:05.764 10:18:59 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:05:05.764 10:18:59 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:05:05.764 10:18:59 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:05:05.764 10:18:59 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:05:05.764 10:18:59 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:05:05.764 10:18:59 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:05:05.764 10:18:59 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:05:05.764 10:18:59 -- common/autotest_common.sh@1542 -- # continue 00:05:05.764 10:18:59 
-- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:05:05.764 10:18:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:05.764 10:18:59 -- common/autotest_common.sh@10 -- # set +x 00:05:05.764 10:18:59 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:05:05.764 10:18:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:05.764 10:18:59 -- common/autotest_common.sh@10 -- # set +x 00:05:05.764 10:18:59 -- spdk/autotest.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:06.330 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:06.330 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:07.265 10:19:01 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:05:07.265 10:19:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:07.265 10:19:01 -- common/autotest_common.sh@10 -- # set +x 00:05:07.265 10:19:01 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:05:07.265 10:19:01 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:07.265 10:19:01 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:07.265 10:19:01 -- common/autotest_common.sh@1562 -- # bdfs=() 00:05:07.265 10:19:01 -- common/autotest_common.sh@1562 -- # local bdfs 00:05:07.265 10:19:01 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:07.265 10:19:01 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:07.265 10:19:01 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:07.265 10:19:01 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:07.265 10:19:01 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:07.265 10:19:01 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:07.525 10:19:01 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:07.525 10:19:01 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:05:07.525 10:19:01 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:05:07.525 10:19:01 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:05:07.525 10:19:01 -- common/autotest_common.sh@1565 -- # device=0x0010 00:05:07.525 10:19:01 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:07.525 10:19:01 -- common/autotest_common.sh@1571 -- # printf '%s\n' 00:05:07.525 10:19:01 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:07.525 10:19:01 -- common/autotest_common.sh@1578 -- # return 0 00:05:07.525 10:19:01 -- spdk/autotest.sh@161 -- # '[' 1 -eq 1 ']' 00:05:07.525 10:19:01 -- spdk/autotest.sh@162 -- # run_test unittest /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:07.525 10:19:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:07.525 10:19:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:07.525 10:19:01 -- common/autotest_common.sh@10 -- # set +x 00:05:07.525 ************************************ 00:05:07.525 START TEST unittest 00:05:07.525 ************************************ 00:05:07.525 10:19:01 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:07.525 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:07.525 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit 00:05:07.525 + testdir=/home/vagrant/spdk_repo/spdk/test/unit 00:05:07.525 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:07.525 ++ readlink -f 
/home/vagrant/spdk_repo/spdk/test/unit/../.. 00:05:07.525 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:07.525 + source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:05:07.525 ++ rpc_py=rpc_cmd 00:05:07.525 ++ set -e 00:05:07.525 ++ shopt -s nullglob 00:05:07.525 ++ shopt -s extglob 00:05:07.525 ++ [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:05:07.525 ++ source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:05:07.525 +++ CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:05:07.525 +++ CONFIG_FIO_PLUGIN=y 00:05:07.525 +++ CONFIG_NVME_CUSE=y 00:05:07.525 +++ CONFIG_RAID5F=y 00:05:07.525 +++ CONFIG_LTO=n 00:05:07.525 +++ CONFIG_SMA=n 00:05:07.525 +++ CONFIG_ISAL=y 00:05:07.525 +++ CONFIG_OPENSSL_PATH= 00:05:07.525 +++ CONFIG_IDXD_KERNEL=n 00:05:07.525 +++ CONFIG_URING_PATH= 00:05:07.525 +++ CONFIG_DAOS=n 00:05:07.525 +++ CONFIG_DPDK_LIB_DIR= 00:05:07.525 +++ CONFIG_OCF=n 00:05:07.525 +++ CONFIG_EXAMPLES=y 00:05:07.525 +++ CONFIG_RDMA_PROV=verbs 00:05:07.525 +++ CONFIG_ISCSI_INITIATOR=y 00:05:07.525 +++ CONFIG_VTUNE=n 00:05:07.525 +++ CONFIG_DPDK_INC_DIR= 00:05:07.525 +++ CONFIG_CET=n 00:05:07.525 +++ CONFIG_TESTS=y 00:05:07.525 +++ CONFIG_APPS=y 00:05:07.525 +++ CONFIG_VBDEV_COMPRESS_MLX5=n 00:05:07.525 +++ CONFIG_DAOS_DIR= 00:05:07.525 +++ CONFIG_CRYPTO_MLX5=n 00:05:07.525 +++ CONFIG_XNVME=n 00:05:07.526 +++ CONFIG_UNIT_TESTS=y 00:05:07.526 +++ CONFIG_FUSE=n 00:05:07.526 +++ CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:05:07.526 +++ CONFIG_OCF_PATH= 00:05:07.526 +++ CONFIG_WPDK_DIR= 00:05:07.526 +++ CONFIG_VFIO_USER=n 00:05:07.526 +++ CONFIG_MAX_LCORES= 00:05:07.526 +++ CONFIG_ARCH=native 00:05:07.526 +++ CONFIG_TSAN=n 00:05:07.526 +++ CONFIG_VIRTIO=y 00:05:07.526 +++ CONFIG_IPSEC_MB=n 00:05:07.526 +++ CONFIG_RDMA_SEND_WITH_INVAL=y 00:05:07.526 +++ CONFIG_ASAN=y 00:05:07.526 +++ CONFIG_SHARED=n 00:05:07.526 +++ CONFIG_VTUNE_DIR= 00:05:07.526 +++ CONFIG_RDMA_SET_TOS=y 00:05:07.526 +++ CONFIG_VBDEV_COMPRESS=n 00:05:07.526 +++ CONFIG_VFIO_USER_DIR= 00:05:07.526 +++ CONFIG_FUZZER_LIB= 00:05:07.526 +++ CONFIG_HAVE_EXECINFO_H=y 00:05:07.526 +++ CONFIG_USDT=n 00:05:07.526 +++ CONFIG_URING_ZNS=n 00:05:07.526 +++ CONFIG_FC_PATH= 00:05:07.526 +++ CONFIG_COVERAGE=y 00:05:07.526 +++ CONFIG_CUSTOMOCF=n 00:05:07.526 +++ CONFIG_DPDK_PKG_CONFIG=n 00:05:07.526 +++ CONFIG_WERROR=y 00:05:07.526 +++ CONFIG_DEBUG=y 00:05:07.526 +++ CONFIG_RDMA=y 00:05:07.526 +++ CONFIG_HAVE_ARC4RANDOM=n 00:05:07.526 +++ CONFIG_FUZZER=n 00:05:07.526 +++ CONFIG_FC=n 00:05:07.526 +++ CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:05:07.526 +++ CONFIG_HAVE_LIBARCHIVE=n 00:05:07.526 +++ CONFIG_DPDK_COMPRESSDEV=n 00:05:07.526 +++ CONFIG_CROSS_PREFIX= 00:05:07.526 +++ CONFIG_PREFIX=/usr/local 00:05:07.526 +++ CONFIG_HAVE_LIBBSD=n 00:05:07.526 +++ CONFIG_UBSAN=y 00:05:07.526 +++ CONFIG_PGO_CAPTURE=n 00:05:07.526 +++ CONFIG_UBLK=n 00:05:07.526 +++ CONFIG_ISAL_CRYPTO=y 00:05:07.526 +++ CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:05:07.526 +++ CONFIG_CRYPTO=n 00:05:07.526 +++ CONFIG_RBD=n 00:05:07.526 +++ CONFIG_LIBDIR= 00:05:07.526 +++ CONFIG_IPSEC_MB_DIR= 00:05:07.526 +++ CONFIG_PGO_USE=n 00:05:07.526 +++ CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:07.526 +++ CONFIG_GOLANG=n 00:05:07.526 +++ CONFIG_VHOST=y 00:05:07.526 +++ CONFIG_IDXD=y 00:05:07.526 +++ CONFIG_AVAHI=n 00:05:07.526 +++ CONFIG_URING=n 00:05:07.526 ++ source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:05:07.526 +++++ dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:05:07.526 ++++ readlink -f 
/home/vagrant/spdk_repo/spdk/test/common 00:05:07.526 +++ _root=/home/vagrant/spdk_repo/spdk/test/common 00:05:07.526 +++ _root=/home/vagrant/spdk_repo/spdk 00:05:07.526 +++ _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:05:07.526 +++ _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:05:07.526 +++ _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:05:07.526 +++ VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:05:07.526 +++ ISCSI_APP=("$_app_dir/iscsi_tgt") 00:05:07.526 +++ NVMF_APP=("$_app_dir/nvmf_tgt") 00:05:07.526 +++ VHOST_APP=("$_app_dir/vhost") 00:05:07.526 +++ DD_APP=("$_app_dir/spdk_dd") 00:05:07.526 +++ SPDK_APP=("$_app_dir/spdk_tgt") 00:05:07.526 +++ [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:05:07.526 +++ [[ #ifndef SPDK_CONFIG_H 00:05:07.526 #define SPDK_CONFIG_H 00:05:07.526 #define SPDK_CONFIG_APPS 1 00:05:07.526 #define SPDK_CONFIG_ARCH native 00:05:07.526 #define SPDK_CONFIG_ASAN 1 00:05:07.526 #undef SPDK_CONFIG_AVAHI 00:05:07.526 #undef SPDK_CONFIG_CET 00:05:07.526 #define SPDK_CONFIG_COVERAGE 1 00:05:07.526 #define SPDK_CONFIG_CROSS_PREFIX 00:05:07.526 #undef SPDK_CONFIG_CRYPTO 00:05:07.526 #undef SPDK_CONFIG_CRYPTO_MLX5 00:05:07.526 #undef SPDK_CONFIG_CUSTOMOCF 00:05:07.526 #undef SPDK_CONFIG_DAOS 00:05:07.526 #define SPDK_CONFIG_DAOS_DIR 00:05:07.526 #define SPDK_CONFIG_DEBUG 1 00:05:07.526 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:05:07.526 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:05:07.526 #define SPDK_CONFIG_DPDK_INC_DIR 00:05:07.526 #define SPDK_CONFIG_DPDK_LIB_DIR 00:05:07.526 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:05:07.526 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:07.526 #define SPDK_CONFIG_EXAMPLES 1 00:05:07.526 #undef SPDK_CONFIG_FC 00:05:07.526 #define SPDK_CONFIG_FC_PATH 00:05:07.526 #define SPDK_CONFIG_FIO_PLUGIN 1 00:05:07.526 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:05:07.526 #undef SPDK_CONFIG_FUSE 00:05:07.526 #undef SPDK_CONFIG_FUZZER 00:05:07.526 #define SPDK_CONFIG_FUZZER_LIB 00:05:07.526 #undef SPDK_CONFIG_GOLANG 00:05:07.526 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:05:07.526 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:05:07.526 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:05:07.526 #undef SPDK_CONFIG_HAVE_LIBBSD 00:05:07.526 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:05:07.526 #define SPDK_CONFIG_IDXD 1 00:05:07.526 #undef SPDK_CONFIG_IDXD_KERNEL 00:05:07.526 #undef SPDK_CONFIG_IPSEC_MB 00:05:07.526 #define SPDK_CONFIG_IPSEC_MB_DIR 00:05:07.526 #define SPDK_CONFIG_ISAL 1 00:05:07.526 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:05:07.526 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:05:07.526 #define SPDK_CONFIG_LIBDIR 00:05:07.526 #undef SPDK_CONFIG_LTO 00:05:07.526 #define SPDK_CONFIG_MAX_LCORES 00:05:07.526 #define SPDK_CONFIG_NVME_CUSE 1 00:05:07.526 #undef SPDK_CONFIG_OCF 00:05:07.526 #define SPDK_CONFIG_OCF_PATH 00:05:07.526 #define SPDK_CONFIG_OPENSSL_PATH 00:05:07.526 #undef SPDK_CONFIG_PGO_CAPTURE 00:05:07.526 #undef SPDK_CONFIG_PGO_USE 00:05:07.526 #define SPDK_CONFIG_PREFIX /usr/local 00:05:07.526 #define SPDK_CONFIG_RAID5F 1 00:05:07.526 #undef SPDK_CONFIG_RBD 00:05:07.526 #define SPDK_CONFIG_RDMA 1 00:05:07.526 #define SPDK_CONFIG_RDMA_PROV verbs 00:05:07.526 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:05:07.526 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:05:07.526 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:05:07.526 #undef SPDK_CONFIG_SHARED 00:05:07.526 #undef SPDK_CONFIG_SMA 00:05:07.526 #define SPDK_CONFIG_TESTS 1 00:05:07.526 
#undef SPDK_CONFIG_TSAN 00:05:07.526 #undef SPDK_CONFIG_UBLK 00:05:07.526 #define SPDK_CONFIG_UBSAN 1 00:05:07.526 #define SPDK_CONFIG_UNIT_TESTS 1 00:05:07.526 #undef SPDK_CONFIG_URING 00:05:07.526 #define SPDK_CONFIG_URING_PATH 00:05:07.526 #undef SPDK_CONFIG_URING_ZNS 00:05:07.526 #undef SPDK_CONFIG_USDT 00:05:07.526 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:05:07.526 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:05:07.526 #undef SPDK_CONFIG_VFIO_USER 00:05:07.526 #define SPDK_CONFIG_VFIO_USER_DIR 00:05:07.526 #define SPDK_CONFIG_VHOST 1 00:05:07.526 #define SPDK_CONFIG_VIRTIO 1 00:05:07.526 #undef SPDK_CONFIG_VTUNE 00:05:07.526 #define SPDK_CONFIG_VTUNE_DIR 00:05:07.526 #define SPDK_CONFIG_WERROR 1 00:05:07.526 #define SPDK_CONFIG_WPDK_DIR 00:05:07.526 #undef SPDK_CONFIG_XNVME 00:05:07.526 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:05:07.526 +++ (( SPDK_AUTOTEST_DEBUG_APPS )) 00:05:07.526 ++ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:07.526 +++ [[ -e /bin/wpdk_common.sh ]] 00:05:07.526 +++ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:07.526 +++ source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:07.526 ++++ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:07.526 ++++ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:07.526 ++++ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:07.526 ++++ export PATH 00:05:07.526 ++++ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:07.526 ++ source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:05:07.526 +++++ dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:05:07.526 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:05:07.526 +++ _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:05:07.526 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:05:07.526 +++ _pmrootdir=/home/vagrant/spdk_repo/spdk 00:05:07.526 +++ TEST_TAG=N/A 00:05:07.526 +++ TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:05:07.526 ++ : 1 00:05:07.526 ++ export RUN_NIGHTLY 00:05:07.526 ++ : 0 00:05:07.526 ++ export SPDK_AUTOTEST_DEBUG_APPS 00:05:07.526 ++ : 0 00:05:07.526 ++ export SPDK_RUN_VALGRIND 00:05:07.526 ++ : 1 00:05:07.526 ++ export SPDK_RUN_FUNCTIONAL_TEST 00:05:07.526 ++ : 1 00:05:07.526 ++ export SPDK_TEST_UNITTEST 00:05:07.526 ++ : 00:05:07.526 ++ export SPDK_TEST_AUTOBUILD 00:05:07.526 ++ : 0 00:05:07.526 ++ export SPDK_TEST_RELEASE_BUILD 00:05:07.526 ++ : 0 
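# The "++ : 1" / "++ export RUN_NIGHTLY" pairs above and below are the xtrace of
# bash's default-assignment idiom in autotest_common.sh: ':' is a no-op builtin,
# so the expansion assigns only when the variable is still unset. A sketch of one
# pair (the value 1 came from autorun-spdk.conf earlier in this run):
: "${RUN_NIGHTLY:=0}"   # keep an existing value, otherwise default to 0
export RUN_NIGHTLY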
00:05:07.526 ++ export SPDK_TEST_ISAL 00:05:07.526 ++ : 0 00:05:07.526 ++ export SPDK_TEST_ISCSI 00:05:07.526 ++ : 0 00:05:07.526 ++ export SPDK_TEST_ISCSI_INITIATOR 00:05:07.526 ++ : 1 00:05:07.526 ++ export SPDK_TEST_NVME 00:05:07.526 ++ : 0 00:05:07.526 ++ export SPDK_TEST_NVME_PMR 00:05:07.526 ++ : 0 00:05:07.526 ++ export SPDK_TEST_NVME_BP 00:05:07.526 ++ : 0 00:05:07.526 ++ export SPDK_TEST_NVME_CLI 00:05:07.526 ++ : 0 00:05:07.526 ++ export SPDK_TEST_NVME_CUSE 00:05:07.526 ++ : 0 00:05:07.526 ++ export SPDK_TEST_NVME_FDP 00:05:07.526 ++ : 0 00:05:07.526 ++ export SPDK_TEST_NVMF 00:05:07.526 ++ : 0 00:05:07.526 ++ export SPDK_TEST_VFIOUSER 00:05:07.526 ++ : 0 00:05:07.526 ++ export SPDK_TEST_VFIOUSER_QEMU 00:05:07.526 ++ : 0 00:05:07.526 ++ export SPDK_TEST_FUZZER 00:05:07.526 ++ : 0 00:05:07.526 ++ export SPDK_TEST_FUZZER_SHORT 00:05:07.526 ++ : rdma 00:05:07.526 ++ export SPDK_TEST_NVMF_TRANSPORT 00:05:07.526 ++ : 0 00:05:07.527 ++ export SPDK_TEST_RBD 00:05:07.527 ++ : 0 00:05:07.527 ++ export SPDK_TEST_VHOST 00:05:07.527 ++ : 1 00:05:07.527 ++ export SPDK_TEST_BLOCKDEV 00:05:07.527 ++ : 0 00:05:07.527 ++ export SPDK_TEST_IOAT 00:05:07.527 ++ : 0 00:05:07.527 ++ export SPDK_TEST_BLOBFS 00:05:07.527 ++ : 0 00:05:07.527 ++ export SPDK_TEST_VHOST_INIT 00:05:07.527 ++ : 0 00:05:07.527 ++ export SPDK_TEST_LVOL 00:05:07.527 ++ : 0 00:05:07.527 ++ export SPDK_TEST_VBDEV_COMPRESS 00:05:07.527 ++ : 1 00:05:07.527 ++ export SPDK_RUN_ASAN 00:05:07.527 ++ : 1 00:05:07.527 ++ export SPDK_RUN_UBSAN 00:05:07.527 ++ : 00:05:07.527 ++ export SPDK_RUN_EXTERNAL_DPDK 00:05:07.527 ++ : 0 00:05:07.527 ++ export SPDK_RUN_NON_ROOT 00:05:07.527 ++ : 0 00:05:07.527 ++ export SPDK_TEST_CRYPTO 00:05:07.527 ++ : 0 00:05:07.527 ++ export SPDK_TEST_FTL 00:05:07.527 ++ : 0 00:05:07.527 ++ export SPDK_TEST_OCF 00:05:07.527 ++ : 0 00:05:07.527 ++ export SPDK_TEST_VMD 00:05:07.527 ++ : 0 00:05:07.527 ++ export SPDK_TEST_OPAL 00:05:07.527 ++ : 00:05:07.527 ++ export SPDK_TEST_NATIVE_DPDK 00:05:07.527 ++ : true 00:05:07.527 ++ export SPDK_AUTOTEST_X 00:05:07.527 ++ : 1 00:05:07.527 ++ export SPDK_TEST_RAID5 00:05:07.527 ++ : 0 00:05:07.527 ++ export SPDK_TEST_URING 00:05:07.527 ++ : 0 00:05:07.527 ++ export SPDK_TEST_USDT 00:05:07.527 ++ : 0 00:05:07.527 ++ export SPDK_TEST_USE_IGB_UIO 00:05:07.527 ++ : 0 00:05:07.527 ++ export SPDK_TEST_SCHEDULER 00:05:07.527 ++ : 0 00:05:07.527 ++ export SPDK_TEST_SCANBUILD 00:05:07.527 ++ : 00:05:07.527 ++ export SPDK_TEST_NVMF_NICS 00:05:07.527 ++ : 0 00:05:07.527 ++ export SPDK_TEST_SMA 00:05:07.527 ++ : 0 00:05:07.527 ++ export SPDK_TEST_DAOS 00:05:07.527 ++ : 0 00:05:07.527 ++ export SPDK_TEST_XNVME 00:05:07.527 ++ : 0 00:05:07.527 ++ export SPDK_TEST_ACCEL_DSA 00:05:07.527 ++ : 0 00:05:07.527 ++ export SPDK_TEST_ACCEL_IAA 00:05:07.527 ++ : 0 00:05:07.527 ++ export SPDK_TEST_ACCEL_IOAT 00:05:07.527 ++ : 00:05:07.527 ++ export SPDK_TEST_FUZZER_TARGET 00:05:07.527 ++ : 0 00:05:07.527 ++ export SPDK_TEST_NVMF_MDNS 00:05:07.527 ++ : 0 00:05:07.527 ++ export SPDK_JSONRPC_GO_CLIENT 00:05:07.527 ++ export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:05:07.527 ++ SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:05:07.527 ++ export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:05:07.527 ++ DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:05:07.527 ++ export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:07.527 ++ VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:07.527 ++ export 
LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:07.527 ++ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:07.527 ++ export PCI_BLOCK_SYNC_ON_RESET=yes 00:05:07.527 ++ PCI_BLOCK_SYNC_ON_RESET=yes 00:05:07.527 ++ export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:07.527 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:07.527 ++ export PYTHONDONTWRITEBYTECODE=1 00:05:07.527 ++ PYTHONDONTWRITEBYTECODE=1 00:05:07.527 ++ export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:07.527 ++ ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:07.527 ++ export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:07.527 ++ UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:07.527 ++ asan_suppression_file=/var/tmp/asan_suppression_file 00:05:07.527 ++ rm -rf /var/tmp/asan_suppression_file 00:05:07.527 ++ cat 00:05:07.527 ++ echo leak:libfuse3.so 00:05:07.527 ++ export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:07.527 ++ LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:07.527 ++ export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:07.527 ++ DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:07.527 ++ '[' -z /var/spdk/dependencies ']' 00:05:07.527 ++ export DEPENDENCY_DIR 00:05:07.527 ++ export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:05:07.527 ++ SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:05:07.527 ++ export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:05:07.527 ++ SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:05:07.527 ++ export QEMU_BIN= 00:05:07.527 ++ QEMU_BIN= 00:05:07.527 ++ export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:05:07.527 ++ VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:05:07.527 ++ export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:05:07.527 ++ AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:05:07.527 ++ export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:07.527 ++ UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:07.527 ++ '[' 0 -eq 0 ']' 00:05:07.527 ++ export valgrind= 00:05:07.527 ++ valgrind= 00:05:07.527 +++ uname -s 00:05:07.527 ++ '[' Linux = Linux ']' 00:05:07.527 ++ HUGEMEM=4096 00:05:07.527 ++ export CLEAR_HUGE=yes 00:05:07.527 ++ CLEAR_HUGE=yes 00:05:07.527 ++ [[ 0 -eq 1 ]] 00:05:07.527 ++ [[ 0 -eq 1 ]] 00:05:07.527 ++ MAKE=make 00:05:07.527 +++ nproc 00:05:07.527 ++ MAKEFLAGS=-j10 00:05:07.527 ++ export HUGEMEM=4096 00:05:07.527 ++ HUGEMEM=4096 00:05:07.527 ++ '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:05:07.527 ++ NO_HUGE=() 00:05:07.527 ++ TEST_MODE= 00:05:07.527 ++ [[ -z '' ]] 00:05:07.527 ++ 
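# The rm/cat/echo/export sequence above provisions a LeakSanitizer suppression
# list so known libfuse leaks do not fail the ASAN run. A sketch reproducing the
# same end state (the script assembles the file slightly differently):
cat > /var/tmp/asan_suppression_file <<'EOF'
leak:libfuse3.so
EOF
export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file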
PYTHONPATH+=:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:05:07.527 ++ exec 00:05:07.527 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:05:07.527 ++ /home/vagrant/spdk_repo/spdk/scripts/rpc.py --server 00:05:07.527 ++ set_test_storage 2147483648 00:05:07.527 ++ [[ -v testdir ]] 00:05:07.527 ++ local requested_size=2147483648 00:05:07.527 ++ local mount target_dir 00:05:07.527 ++ local -A mounts fss sizes avails uses 00:05:07.527 ++ local source fs size avail mount use 00:05:07.527 ++ local storage_fallback storage_candidates 00:05:07.527 +++ mktemp -udt spdk.XXXXXX 00:05:07.527 ++ storage_fallback=/tmp/spdk.DpUw41 00:05:07.527 ++ storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:05:07.527 ++ [[ -n '' ]] 00:05:07.527 ++ [[ -n '' ]] 00:05:07.527 ++ mkdir -p /home/vagrant/spdk_repo/spdk/test/unit /tmp/spdk.DpUw41/tests/unit /tmp/spdk.DpUw41 00:05:07.527 ++ requested_size=2214592512 00:05:07.527 ++ read -r source fs size use avail _ mount 00:05:07.527 +++ df -T 00:05:07.527 +++ grep -v Filesystem 00:05:07.527 ++ mounts["$mount"]=udev 00:05:07.527 ++ fss["$mount"]=devtmpfs 00:05:07.527 ++ avails["$mount"]=6224457728 00:05:07.527 ++ sizes["$mount"]=6224457728 00:05:07.527 ++ uses["$mount"]=0 00:05:07.527 ++ read -r source fs size use avail _ mount 00:05:07.527 ++ mounts["$mount"]=tmpfs 00:05:07.527 ++ fss["$mount"]=tmpfs 00:05:07.527 ++ avails["$mount"]=1253408768 00:05:07.527 ++ sizes["$mount"]=1254514688 00:05:07.527 ++ uses["$mount"]=1105920 00:05:07.527 ++ read -r source fs size use avail _ mount 00:05:07.527 ++ mounts["$mount"]=/dev/vda1 00:05:07.527 ++ fss["$mount"]=ext4 00:05:07.527 ++ avails["$mount"]=10737545216 00:05:07.527 ++ sizes["$mount"]=20616794112 00:05:07.527 ++ uses["$mount"]=9862471680 00:05:07.527 ++ read -r source fs size use avail _ mount 00:05:07.527 ++ mounts["$mount"]=tmpfs 00:05:07.527 ++ fss["$mount"]=tmpfs 00:05:07.527 ++ avails["$mount"]=6272557056 00:05:07.527 ++ sizes["$mount"]=6272557056 00:05:07.527 ++ uses["$mount"]=0 00:05:07.527 ++ read -r source fs size use avail _ mount 00:05:07.527 ++ mounts["$mount"]=tmpfs 00:05:07.527 ++ fss["$mount"]=tmpfs 00:05:07.527 ++ avails["$mount"]=5242880 00:05:07.527 ++ sizes["$mount"]=5242880 00:05:07.527 ++ uses["$mount"]=0 00:05:07.527 ++ read -r source fs size use avail _ mount 00:05:07.527 ++ mounts["$mount"]=tmpfs 00:05:07.527 ++ fss["$mount"]=tmpfs 00:05:07.527 ++ avails["$mount"]=6272557056 00:05:07.527 ++ sizes["$mount"]=6272557056 00:05:07.527 ++ uses["$mount"]=0 00:05:07.527 ++ read -r source fs size use avail _ mount 00:05:07.527 ++ mounts["$mount"]=/dev/loop0 00:05:07.527 ++ fss["$mount"]=squashfs 00:05:07.527 ++ avails["$mount"]=0 00:05:07.527 ++ sizes["$mount"]=67108864 00:05:07.527 ++ uses["$mount"]=67108864 00:05:07.527 ++ read -r source fs size use avail _ mount 00:05:07.527 ++ mounts["$mount"]=/dev/vda15 00:05:07.527 ++ fss["$mount"]=vfat 00:05:07.527 ++ avails["$mount"]=103089152 00:05:07.527 ++ sizes["$mount"]=109422592 00:05:07.527 ++ uses["$mount"]=6334464 00:05:07.527 ++ read -r source fs size use avail _ mount 00:05:07.527 ++ mounts["$mount"]=/dev/loop2 00:05:07.527 ++ fss["$mount"]=squashfs 00:05:07.527 ++ avails["$mount"]=0 00:05:07.527 ++ sizes["$mount"]=41025536 00:05:07.527 ++ uses["$mount"]=41025536 00:05:07.527 ++ read -r source fs size use avail _ mount 00:05:07.527 ++ mounts["$mount"]=/dev/loop1 00:05:07.527 ++ 
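# set_test_storage is mid-scan here, folding `df -T` rows into the associative
# arrays mounts/fss/sizes/avails/uses; once it reaches the ext4 root it accepts
# the mount only if used+requested space stays at or below 95% of the
# filesystem. That acceptance test with this run's numbers for /dev/vda1 on /
# (the trace below computes the same new_size, 12077064192):
requested=2214592512 used=9862471680 size=20616794112
new_size=$((used + requested))
if (( new_size * 100 / size > 95 )); then
    echo "too full, try next candidate"
else
    echo "ok"   # taken here: roughly 58% of 20616794112
fi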
fss["$mount"]=squashfs 00:05:07.527 ++ avails["$mount"]=0 00:05:07.527 ++ sizes["$mount"]=96337920 00:05:07.527 ++ uses["$mount"]=96337920 00:05:07.527 ++ read -r source fs size use avail _ mount 00:05:07.527 ++ mounts["$mount"]=tmpfs 00:05:07.527 ++ fss["$mount"]=tmpfs 00:05:07.527 ++ avails["$mount"]=1254510592 00:05:07.527 ++ sizes["$mount"]=1254510592 00:05:07.527 ++ uses["$mount"]=0 00:05:07.527 ++ read -r source fs size use avail _ mount 00:05:07.527 ++ mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu20-vg-autotest_2/ubuntu2004-libvirt/output 00:05:07.527 ++ fss["$mount"]=fuse.sshfs 00:05:07.527 ++ avails["$mount"]=96490774528 00:05:07.527 ++ sizes["$mount"]=105088212992 00:05:07.528 ++ uses["$mount"]=3212005376 00:05:07.528 ++ read -r source fs size use avail _ mount 00:05:07.528 ++ printf '* Looking for test storage...\n' 00:05:07.528 * Looking for test storage... 00:05:07.528 ++ local target_space new_size 00:05:07.528 ++ for target_dir in "${storage_candidates[@]}" 00:05:07.528 +++ df /home/vagrant/spdk_repo/spdk/test/unit 00:05:07.528 +++ awk '$1 !~ /Filesystem/{print $6}' 00:05:07.528 ++ mount=/ 00:05:07.528 ++ target_space=10737545216 00:05:07.528 ++ (( target_space == 0 || target_space < requested_size )) 00:05:07.528 ++ (( target_space >= requested_size )) 00:05:07.528 ++ [[ ext4 == tmpfs ]] 00:05:07.528 ++ [[ ext4 == ramfs ]] 00:05:07.528 ++ [[ / == / ]] 00:05:07.528 ++ new_size=12077064192 00:05:07.528 ++ (( new_size * 100 / sizes[/] > 95 )) 00:05:07.528 ++ export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:05:07.528 ++ SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:05:07.528 ++ printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/unit 00:05:07.528 * Found test storage at /home/vagrant/spdk_repo/spdk/test/unit 00:05:07.528 ++ return 0 00:05:07.528 ++ set -o errtrace 00:05:07.528 ++ shopt -s extdebug 00:05:07.528 ++ trap 'trap - ERR; print_backtrace >&2' ERR 00:05:07.528 ++ PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:05:07.528 10:19:01 -- common/autotest_common.sh@1672 -- # true 00:05:07.528 10:19:01 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:05:07.528 10:19:01 -- common/autotest_common.sh@25 -- # [[ -n '' ]] 00:05:07.528 10:19:01 -- common/autotest_common.sh@29 -- # exec 00:05:07.528 10:19:01 -- common/autotest_common.sh@31 -- # xtrace_restore 00:05:07.528 10:19:01 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:05:07.528 10:19:01 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:05:07.528 10:19:01 -- common/autotest_common.sh@18 -- # set -x 00:05:07.528 10:19:01 -- unit/unittest.sh@17 -- # cd /home/vagrant/spdk_repo/spdk 00:05:07.528 10:19:01 -- unit/unittest.sh@151 -- # '[' 0 -eq 1 ']' 00:05:07.528 10:19:01 -- unit/unittest.sh@158 -- # '[' -z x ']' 00:05:07.528 10:19:01 -- unit/unittest.sh@165 -- # '[' 0 -eq 1 ']' 00:05:07.528 10:19:01 -- unit/unittest.sh@178 -- # grep CC_TYPE /home/vagrant/spdk_repo/spdk/mk/cc.mk 00:05:07.528 10:19:01 -- unit/unittest.sh@178 -- # CC_TYPE=CC_TYPE=gcc 00:05:07.528 10:19:01 -- unit/unittest.sh@179 -- # hash lcov 00:05:07.528 10:19:01 -- unit/unittest.sh@179 -- # grep -q '#define SPDK_CONFIG_COVERAGE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:07.528 10:19:01 -- unit/unittest.sh@179 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:05:07.528 10:19:01 -- unit/unittest.sh@180 -- # cov_avail=yes 00:05:07.528 10:19:01 -- unit/unittest.sh@184 -- # '[' yes = yes ']' 00:05:07.528 10:19:01 -- unit/unittest.sh@186 -- # [[ -z /home/vagrant/spdk_repo/spdk/../output ]] 00:05:07.528 10:19:01 -- unit/unittest.sh@189 -- # UT_COVERAGE=/home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:05:07.528 10:19:01 -- unit/unittest.sh@191 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:05:07.528 10:19:01 -- unit/unittest.sh@199 -- # export 'LCOV_OPTS= 00:05:07.528 --rc lcov_branch_coverage=1 00:05:07.528 --rc lcov_function_coverage=1 00:05:07.528 --rc genhtml_branch_coverage=1 00:05:07.528 --rc genhtml_function_coverage=1 00:05:07.528 --rc genhtml_legend=1 00:05:07.528 --rc geninfo_all_blocks=1 00:05:07.528 ' 00:05:07.528 10:19:01 -- unit/unittest.sh@199 -- # LCOV_OPTS=' 00:05:07.528 --rc lcov_branch_coverage=1 00:05:07.528 --rc lcov_function_coverage=1 00:05:07.528 --rc genhtml_branch_coverage=1 00:05:07.528 --rc genhtml_function_coverage=1 00:05:07.528 --rc genhtml_legend=1 00:05:07.528 --rc geninfo_all_blocks=1 00:05:07.528 ' 00:05:07.528 10:19:01 -- unit/unittest.sh@200 -- # export 'LCOV=lcov 00:05:07.528 --rc lcov_branch_coverage=1 00:05:07.528 --rc lcov_function_coverage=1 00:05:07.528 --rc genhtml_branch_coverage=1 00:05:07.528 --rc genhtml_function_coverage=1 00:05:07.528 --rc genhtml_legend=1 00:05:07.528 --rc geninfo_all_blocks=1 00:05:07.528 --no-external' 00:05:07.528 10:19:01 -- unit/unittest.sh@200 -- # LCOV='lcov 00:05:07.528 --rc lcov_branch_coverage=1 00:05:07.528 --rc lcov_function_coverage=1 00:05:07.528 --rc genhtml_branch_coverage=1 00:05:07.528 --rc genhtml_function_coverage=1 00:05:07.528 --rc genhtml_legend=1 00:05:07.528 --rc geninfo_all_blocks=1 00:05:07.528 --no-external' 00:05:07.528 10:19:01 -- unit/unittest.sh@202 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -d . 
-t Baseline -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info
00:05:09.431 geninfo: WARNING: GCOV did not produce any data for the header-compilation objects under /home/vagrant/spdk_repo/spdk/test/cpp_headers ("<header>.gcno:no functions found" plus the matching geninfo warning, emitted once per header: sock, bit_pool, crc16, nvmf_fc_spec, ioat, nvme_ocssd, trace_parser, likely, barrier, cpuset, thread, assert, fd, and every other header in that directory; only the final few pairs are kept below)
00:05:09.690
/home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:05:09.690 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:05:09.690 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:05:09.690 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:05:09.690 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:05:09.690 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:05:09.690 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:05:09.690 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:05:56.405 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:05:56.405 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:05:56.405 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:05:56.405 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:05:56.405 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:05:56.405 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:05:56.405 10:19:49 -- unit/unittest.sh@206 -- # uname -m 00:05:56.405 10:19:49 -- unit/unittest.sh@206 -- # '[' x86_64 = aarch64 ']' 00:05:56.405 10:19:49 -- unit/unittest.sh@210 -- # run_test unittest_pci_event /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:05:56.405 10:19:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:56.405 10:19:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:56.405 10:19:49 -- common/autotest_common.sh@10 -- # set +x 00:05:56.406 ************************************ 00:05:56.406 START TEST unittest_pci_event 00:05:56.406 ************************************ 00:05:56.406 10:19:49 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:05:56.406 00:05:56.406 00:05:56.406 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.406 http://cunit.sourceforge.net/ 00:05:56.406 00:05:56.406 00:05:56.406 Suite: pci_event 00:05:56.406 Test: test_pci_parse_event ...[2024-07-12 10:19:49.286095] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 162:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 0000 00:05:56.406 [2024-07-12 10:19:49.287073] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 185:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 000000 00:05:56.406 passed 00:05:56.406 00:05:56.406 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.406 suites 1 1 n/a 0 0 00:05:56.406 tests 1 1 1 0 0 00:05:56.406 asserts 15 15 15 0 n/a 00:05:56.406 00:05:56.406 Elapsed time = 0.001 seconds 00:05:56.406 ************************************ 00:05:56.406 END TEST unittest_pci_event 00:05:56.406 ************************************ 00:05:56.406 00:05:56.406 real 0m0.043s 00:05:56.406 user 0m0.028s 00:05:56.406 sys 0m0.012s 00:05:56.406 10:19:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.406 10:19:49 -- common/autotest_common.sh@10 -- # set +x 00:05:56.406 10:19:49 -- 
unit/unittest.sh@211 -- # run_test unittest_include /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:05:56.406 10:19:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:56.406 10:19:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:56.406 10:19:49 -- common/autotest_common.sh@10 -- # set +x 00:05:56.406 ************************************ 00:05:56.406 START TEST unittest_include 00:05:56.406 ************************************ 00:05:56.406 10:19:49 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:05:56.406 00:05:56.406 00:05:56.406 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.406 http://cunit.sourceforge.net/ 00:05:56.406 00:05:56.406 00:05:56.406 Suite: histogram 00:05:56.406 Test: histogram_test ...passed 00:05:56.406 Test: histogram_merge ...passed 00:05:56.406 00:05:56.406 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.406 suites 1 1 n/a 0 0 00:05:56.406 tests 2 2 2 0 0 00:05:56.406 asserts 50 50 50 0 n/a 00:05:56.406 00:05:56.406 Elapsed time = 0.006 seconds 00:05:56.406 00:05:56.406 real 0m0.038s 00:05:56.406 user 0m0.014s 00:05:56.406 sys 0m0.025s 00:05:56.406 10:19:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.406 10:19:49 -- common/autotest_common.sh@10 -- # set +x 00:05:56.406 ************************************ 00:05:56.406 END TEST unittest_include 00:05:56.406 ************************************ 00:05:56.406 10:19:49 -- unit/unittest.sh@212 -- # run_test unittest_bdev unittest_bdev 00:05:56.406 10:19:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:56.406 10:19:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:56.406 10:19:49 -- common/autotest_common.sh@10 -- # set +x 00:05:56.406 ************************************ 00:05:56.406 START TEST unittest_bdev 00:05:56.406 ************************************ 00:05:56.406 10:19:49 -- common/autotest_common.sh@1104 -- # unittest_bdev 00:05:56.406 10:19:49 -- unit/unittest.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev.c/bdev_ut 00:05:56.406 00:05:56.406 00:05:56.406 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.406 http://cunit.sourceforge.net/ 00:05:56.406 00:05:56.406 00:05:56.406 Suite: bdev 00:05:56.406 Test: bytes_to_blocks_test ...passed 00:05:56.406 Test: num_blocks_test ...passed 00:05:56.406 Test: io_valid_test ...passed 00:05:56.406 Test: open_write_test ...[2024-07-12 10:19:49.553723] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev1 already claimed: type exclusive_write by module bdev_ut 00:05:56.406 [2024-07-12 10:19:49.554095] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev4 already claimed: type exclusive_write by module bdev_ut 00:05:56.406 [2024-07-12 10:19:49.554225] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev5 already claimed: type exclusive_write by module bdev_ut 00:05:56.406 passed 00:05:56.406 Test: claim_test ...passed 00:05:56.406 Test: alias_add_del_test ...[2024-07-12 10:19:49.653685] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name bdev0 already exists 00:05:56.406 [2024-07-12 10:19:49.653833] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4583:spdk_bdev_alias_add: *ERROR*: Empty alias passed 00:05:56.406 [2024-07-12 10:19:49.653899] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name proper 
alias 0 already exists 00:05:56.406 passed 00:05:56.406 Test: get_device_stat_test ...passed 00:05:56.406 Test: bdev_io_types_test ...passed 00:05:56.406 Test: bdev_io_wait_test ...passed 00:05:56.406 Test: bdev_io_spans_split_test ...passed 00:05:56.406 Test: bdev_io_boundary_split_test ...passed 00:05:56.406 Test: bdev_io_max_size_and_segment_split_test ...[2024-07-12 10:19:49.852877] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:3185:_bdev_rw_split: *ERROR*: The first child io was less than a block size 00:05:56.406 passed 00:05:56.406 Test: bdev_io_mix_split_test ...passed 00:05:56.406 Test: bdev_io_split_with_io_wait ...passed 00:05:56.406 Test: bdev_io_write_unit_split_test ...[2024-07-12 10:19:49.982818] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:05:56.406 [2024-07-12 10:19:49.982921] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:05:56.406 [2024-07-12 10:19:49.982958] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 1 does not match the write_unit_size 32 00:05:56.406 [2024-07-12 10:19:49.983006] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 32 does not match the write_unit_size 64 00:05:56.406 passed 00:05:56.406 Test: bdev_io_alignment_with_boundary ...passed 00:05:56.406 Test: bdev_io_alignment ...passed 00:05:56.406 Test: bdev_histograms ...passed 00:05:56.406 Test: bdev_write_zeroes ...passed 00:05:56.406 Test: bdev_compare_and_write ...passed 00:05:56.406 Test: bdev_compare ...passed 00:05:56.665 Test: bdev_compare_emulated ...passed 00:05:56.665 Test: bdev_zcopy_write ...passed 00:05:56.665 Test: bdev_zcopy_read ...passed 00:05:56.665 Test: bdev_open_while_hotremove ...passed 00:05:56.665 Test: bdev_close_while_hotremove ...passed 00:05:56.665 Test: bdev_open_ext_test ...[2024-07-12 10:19:50.471530] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8046:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:05:56.665 passed 00:05:56.665 Test: bdev_open_ext_unregister ...passed 00:05:56.665 Test: bdev_set_io_timeout ...[2024-07-12 10:19:50.471763] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8046:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:05:56.665 passed 00:05:56.665 Test: bdev_set_qd_sampling ...passed 00:05:56.665 Test: lba_range_overlap ...passed 00:05:56.923 Test: lock_lba_range_check_ranges ...passed 00:05:56.923 Test: lock_lba_range_with_io_outstanding ...passed 00:05:56.923 Test: lock_lba_range_overlapped ...passed 00:05:56.923 Test: bdev_quiesce ...[2024-07-12 10:19:50.704752] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9969:_spdk_bdev_quiesce: *ERROR*: The range to unquiesce was not found. 
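# Every "START TEST ... / END TEST ..." banner in this log comes from the
# run_test wrapper in autotest_common.sh, and the real/user/sys lines are `time`
# output. A simplified sketch of the pattern (not the exact function, which also
# validates its arguments and records timing data; banner width illustrative):
run_test_sketch() {
    local name=$1 banner; shift
    banner=$(printf '*%.0s' {1..36})
    echo "$banner"; echo "START TEST $name"; echo "$banner"
    time "$@"
    echo "$banner"; echo "END TEST $name"; echo "$banner"
}
run_test_sketch unittest_include /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut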
00:05:56.923 passed 00:05:56.923 Test: bdev_io_abort ...passed 00:05:56.923 Test: bdev_unmap ...passed 00:05:57.181 Test: bdev_write_zeroes_split_test ...passed 00:05:57.181 Test: bdev_set_options_test ...[2024-07-12 10:19:50.855862] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 485:spdk_bdev_set_opts: *ERROR*: opts_size inside opts cannot be zero value 00:05:57.181 passed 00:05:57.181 Test: bdev_get_memory_domains ...passed 00:05:57.181 Test: bdev_io_ext ...passed 00:05:57.181 Test: bdev_io_ext_no_opts ...passed 00:05:57.181 Test: bdev_io_ext_invalid_opts ...passed 00:05:57.181 Test: bdev_io_ext_split ...passed 00:05:57.181 Test: bdev_io_ext_bounce_buffer ...passed 00:05:57.181 Test: bdev_register_uuid_alias ...[2024-07-12 10:19:51.092355] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name c721d5e7-c790-4353-a98d-6cea97bdc0d8 already exists 00:05:57.181 [2024-07-12 10:19:51.092455] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:c721d5e7-c790-4353-a98d-6cea97bdc0d8 alias for bdev bdev0 00:05:57.449 passed 00:05:57.449 Test: bdev_unregister_by_name ...passed 00:05:57.449 Test: for_each_bdev_test ...[2024-07-12 10:19:51.116045] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7836:spdk_bdev_unregister_by_name: *ERROR*: Failed to open bdev with name: bdev1 00:05:57.449 [2024-07-12 10:19:51.116125] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7844:spdk_bdev_unregister_by_name: *ERROR*: Bdev bdev was not registered by the specified module. 00:05:57.449 passed 00:05:57.449 Test: bdev_seek_test ...passed 00:05:57.449 Test: bdev_copy ...passed 00:05:57.449 Test: bdev_copy_split_test ...passed 00:05:57.449 Test: examine_locks ...passed 00:05:57.449 Test: claim_v2_rwo ...[2024-07-12 10:19:51.249204] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:57.449 [2024-07-12 10:19:51.249287] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8570:claim_verify_rwo: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:57.449 [2024-07-12 10:19:51.249308] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:57.449 [2024-07-12 10:19:51.249367] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:57.449 [2024-07-12 10:19:51.249386] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:57.449 [2024-07-12 10:19:51.249431] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8565:claim_verify_rwo: *ERROR*: bdev0: key option not supported with read-write-once claims 00:05:57.449 passed 00:05:57.449 Test: claim_v2_rom ...[2024-07-12 10:19:51.249646] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:57.449 [2024-07-12 10:19:51.249713] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:57.449 [2024-07-12 10:19:51.249741] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module 
bdev_ut 00:05:57.449 [2024-07-12 10:19:51.249774] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:57.449 passed 00:05:57.449 Test: claim_v2_rwm ...[2024-07-12 10:19:51.249831] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8608:claim_verify_rom: *ERROR*: bdev0: key option not supported with read-only-may claims 00:05:57.449 [2024-07-12 10:19:51.249875] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:05:57.449 [2024-07-12 10:19:51.249993] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8638:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:05:57.449 [2024-07-12 10:19:51.250058] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:57.449 [2024-07-12 10:19:51.250095] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:57.449 [2024-07-12 10:19:51.250120] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:57.449 passed 00:05:57.449 Test: claim_v2_existing_writer ...[2024-07-12 10:19:51.250137] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:57.449 [2024-07-12 10:19:51.250163] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8658:claim_verify_rwm: *ERROR*: bdev bdev0 already claimed with another key: type read_many_write_many by module bdev_ut 00:05:57.449 [2024-07-12 10:19:51.250206] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8638:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:05:57.449 passed 00:05:57.449 Test: claim_v2_existing_v1 ...passed 00:05:57.449 Test: claim_v1_existing_v2 ...[2024-07-12 10:19:51.250406] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:05:57.449 [2024-07-12 10:19:51.250449] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:05:57.449 [2024-07-12 10:19:51.250578] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:05:57.449 [2024-07-12 10:19:51.250610] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:05:57.449 [2024-07-12 10:19:51.250628] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:05:57.449 passed 00:05:57.449 Test: examine_claimed ...[2024-07-12 10:19:51.250759] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:57.449 [2024-07-12 10:19:51.250824] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type 
read_many_write_many by module bdev_ut 00:05:57.449 [2024-07-12 10:19:51.250860] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:57.450 passed 00:05:57.450 00:05:57.450 Run Summary: Type Total Ran Passed Failed Inactive 00:05:57.450 suites 1 1 n/a 0 0 00:05:57.450 tests 59 59 59 0 0 00:05:57.450 asserts 4599 4599 4599 0 n/a 00:05:57.450 00:05:57.450 Elapsed time = 1.776 seconds[2024-07-12 10:19:51.251270] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module vbdev_ut_examine1 00:05:57.450 00:05:57.450 10:19:51 -- unit/unittest.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut 00:05:57.450 00:05:57.450 00:05:57.450 CUnit - A unit testing framework for C - Version 2.1-3 00:05:57.450 http://cunit.sourceforge.net/ 00:05:57.450 00:05:57.450 00:05:57.450 Suite: nvme 00:05:57.450 Test: test_create_ctrlr ...passed 00:05:57.450 Test: test_reset_ctrlr ...[2024-07-12 10:19:51.301681] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:57.450 passed 00:05:57.450 Test: test_race_between_reset_and_destruct_ctrlr ...passed 00:05:57.450 Test: test_failover_ctrlr ...passed 00:05:57.450 Test: test_race_between_failover_and_add_secondary_trid ...[2024-07-12 10:19:51.304462] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:57.450 [2024-07-12 10:19:51.304680] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:57.450 [2024-07-12 10:19:51.304901] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:57.450 passed 00:05:57.450 Test: test_pending_reset ...[2024-07-12 10:19:51.306513] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:57.450 [2024-07-12 10:19:51.306788] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:57.450 passed 00:05:57.450 Test: test_attach_ctrlr ...[2024-07-12 10:19:51.308018] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4236:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:05:57.450 passed 00:05:57.450 Test: test_aer_cb ...passed 00:05:57.450 Test: test_submit_nvme_cmd ...passed 00:05:57.450 Test: test_add_remove_trid ...passed 00:05:57.450 Test: test_abort ...[2024-07-12 10:19:51.311738] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:7227:bdev_nvme_comparev_and_writev_done: *ERROR*: Unexpected write success after compare failure. 00:05:57.450 passed 00:05:57.450 Test: test_get_io_qpair ...passed 00:05:57.450 Test: test_bdev_unregister ...passed 00:05:57.450 Test: test_compare_ns ...passed 00:05:57.450 Test: test_init_ana_log_page ...passed 00:05:57.450 Test: test_get_memory_domains ...passed 00:05:57.450 Test: test_reconnect_qpair ...[2024-07-12 10:19:51.314628] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:05:57.450 passed 00:05:57.450 Test: test_create_bdev_ctrlr ...[2024-07-12 10:19:51.315238] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5279:bdev_nvme_check_multipath: *ERROR*: cntlid 18 are duplicated. 00:05:57.450 passed 00:05:57.450 Test: test_add_multi_ns_to_bdev ...[2024-07-12 10:19:51.316703] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4492:nvme_bdev_add_ns: *ERROR*: Namespaces are not identical. 00:05:57.450 passed 00:05:57.450 Test: test_add_multi_io_paths_to_nbdev_ch ...passed 00:05:57.450 Test: test_admin_path ...passed 00:05:57.450 Test: test_reset_bdev_ctrlr ...passed 00:05:57.450 Test: test_find_io_path ...passed 00:05:57.450 Test: test_retry_io_if_ana_state_is_updating ...passed 00:05:57.450 Test: test_retry_io_for_io_path_error ...passed 00:05:57.450 Test: test_retry_io_count ...passed 00:05:57.450 Test: test_concurrent_read_ana_log_page ...passed 00:05:57.450 Test: test_retry_io_for_ana_error ...passed 00:05:57.450 Test: test_check_io_error_resiliency_params ...[2024-07-12 10:19:51.324358] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5932:bdev_nvme_check_io_error_resiliency_params: *ERROR*: ctrlr_loss_timeout_sec can't be less than -1. 00:05:57.450 [2024-07-12 10:19:51.324440] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5936:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:05:57.450 [2024-07-12 10:19:51.324468] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5945:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:05:57.450 [2024-07-12 10:19:51.324503] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5948:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec. 00:05:57.450 [2024-07-12 10:19:51.324525] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5960:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:05:57.450 [2024-07-12 10:19:51.324563] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5960:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:05:57.450 passed 00:05:57.450 Test: test_retry_io_if_ctrlr_is_resetting ...[2024-07-12 10:19:51.324584] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5940:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io-fail_timeout_sec. 00:05:57.450 [2024-07-12 10:19:51.324639] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5955:bdev_nvme_check_io_error_resiliency_params: *ERROR*: fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec. 00:05:57.450 [2024-07-12 10:19:51.324675] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5952:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io_fail_timeout_sec. 00:05:57.450 passed 00:05:57.450 Test: test_reconnect_ctrlr ...[2024-07-12 10:19:51.325543] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:57.450 [2024-07-12 10:19:51.325716] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:05:57.450 [2024-07-12 10:19:51.326021] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:57.450 [2024-07-12 10:19:51.326180] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:57.450 [2024-07-12 10:19:51.326389] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:57.450 passed 00:05:57.450 Test: test_retry_failover_ctrlr ...[2024-07-12 10:19:51.326759] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:57.450 passed 00:05:57.450 Test: test_fail_path ...[2024-07-12 10:19:51.327418] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:57.450 [2024-07-12 10:19:51.327585] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:57.450 [2024-07-12 10:19:51.327686] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:57.450 [2024-07-12 10:19:51.327798] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:57.450 [2024-07-12 10:19:51.327960] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:57.450 passed 00:05:57.450 Test: test_nvme_ns_cmp ...passed 00:05:57.450 Test: test_ana_transition ...passed 00:05:57.450 Test: test_set_preferred_path ...passed 00:05:57.450 Test: test_find_next_io_path ...passed 00:05:57.450 Test: test_find_io_path_min_qd ...passed 00:05:57.450 Test: test_disable_auto_failback ...[2024-07-12 10:19:51.329883] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:57.450 passed 00:05:57.450 Test: test_set_multipath_policy ...passed 00:05:57.450 Test: test_uuid_generation ...passed 00:05:57.450 Test: test_retry_io_to_same_path ...passed 00:05:57.450 Test: test_race_between_reset_and_disconnected ...passed 00:05:57.450 Test: test_ctrlr_op_rpc ...passed 00:05:57.450 Test: test_bdev_ctrlr_op_rpc ...passed 00:05:57.450 Test: test_disable_enable_ctrlr ...[2024-07-12 10:19:51.333873] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:57.450 [2024-07-12 10:19:51.334059] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
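
Editor's note: the test_check_io_error_resiliency_params messages above spell out the relationships bdev_nvme enforces between ctrlr_loss_timeout_sec, reconnect_delay_sec, and fast_io_fail_timeout_sec. Below is a sketch that restates only the rules quoted in those messages; the function name and types are illustrative, and treating -1 as "retry forever" (so the upper-bound comparisons apply only to positive timeouts) is an assumption:

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative restatement of the constraints quoted in the log above. */
    static bool
    resiliency_params_valid(int32_t ctrlr_loss_timeout_sec,
                            uint32_t reconnect_delay_sec,
                            uint32_t fast_io_fail_timeout_sec)
    {
            if (ctrlr_loss_timeout_sec < -1) {
                    return false;  /* "ctrlr_loss_timeout_sec can't be less than -1" */
            }
            if (ctrlr_loss_timeout_sec == 0) {
                    /* "Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0
                     * if ctrlr_loss_timeout_sec is 0" */
                    return reconnect_delay_sec == 0 && fast_io_fail_timeout_sec == 0;
            }
            if (reconnect_delay_sec == 0) {
                    return false;  /* "can't be 0 if ctrlr_loss_timeout_sec is not 0" */
            }
            if (ctrlr_loss_timeout_sec > 0 &&
                (int64_t)reconnect_delay_sec > ctrlr_loss_timeout_sec) {
                    return false;  /* "can't be more than ctrlr_loss_timeout_sec" */
            }
            if (fast_io_fail_timeout_sec != 0) {
                    if (reconnect_delay_sec > fast_io_fail_timeout_sec) {
                            return false;  /* "can't be more than fast_io_fail_timeout_sec" */
                    }
                    if (ctrlr_loss_timeout_sec > 0 &&
                        (int64_t)fast_io_fail_timeout_sec > ctrlr_loss_timeout_sec) {
                            return false;  /* "can't be more than ctrlr_loss_timeout_sec" */
                    }
            }
            return true;
    }
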
00:05:57.450 passed 00:05:57.450 Test: test_delete_ctrlr_done ...passed 00:05:57.450 Test: test_ns_remove_during_reset ...passed 00:05:57.450 00:05:57.450 Run Summary: Type Total Ran Passed Failed Inactive 00:05:57.450 suites 1 1 n/a 0 0 00:05:57.450 tests 48 48 48 0 0 00:05:57.451 asserts 3553 3553 3553 0 n/a 00:05:57.451 00:05:57.451 Elapsed time = 0.035 seconds 00:05:57.451 10:19:51 -- unit/unittest.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut 00:05:57.721 Test Options 00:05:57.721 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2 00:05:57.721 00:05:57.721 00:05:57.721 CUnit - A unit testing framework for C - Version 2.1-3 00:05:57.721 http://cunit.sourceforge.net/ 00:05:57.721 00:05:57.721 00:05:57.721 Suite: raid 00:05:57.721 Test: test_create_raid ...passed 00:05:57.721 Test: test_create_raid_superblock ...passed 00:05:57.721 Test: test_delete_raid ...passed 00:05:57.721 Test: test_create_raid_invalid_args ...[2024-07-12 10:19:51.377009] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1357:_raid_bdev_create: *ERROR*: Unsupported raid level '-1' 00:05:57.721 [2024-07-12 10:19:51.377434] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1351:_raid_bdev_create: *ERROR*: Invalid strip size 1231 00:05:57.721 [2024-07-12 10:19:51.377918] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1341:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1 00:05:57.721 [2024-07-12 10:19:51.378180] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:2934:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:05:57.721 [2024-07-12 10:19:51.379021] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:2934:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:05:57.721 passed 00:05:57.721 Test: test_delete_raid_invalid_args ...passed 00:05:57.721 Test: test_io_channel ...passed 00:05:57.721 Test: test_reset_io ...passed 00:05:57.721 Test: test_write_io ...passed 00:05:57.721 Test: test_read_io ...passed 00:05:58.676 Test: test_unmap_io ...passed 00:05:58.676 Test: test_io_failure ...[2024-07-12 10:19:52.478676] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c: 832:raid_bdev_submit_request: *ERROR*: submit request, invalid io type 0 00:05:58.676 passed 00:05:58.676 Test: test_multi_raid_no_io ...passed 00:05:58.676 Test: test_multi_raid_with_io ...passed 00:05:58.676 Test: test_io_type_supported ...passed 00:05:58.676 Test: test_raid_json_dump_info ...passed 00:05:58.676 Test: test_context_size ...passed 00:05:58.676 Test: test_raid_level_conversions ...passed 00:05:58.676 Test: test_raid_process ...passed 00:05:58.676 Test: test_raid_io_split ...passed 00:05:58.676 00:05:58.676 Run Summary: Type Total Ran Passed Failed Inactive 00:05:58.676 suites 1 1 n/a 0 0 00:05:58.676 tests 19 19 19 0 0 00:05:58.676 asserts 177879 177879 177879 0 n/a 00:05:58.676 00:05:58.676 Elapsed time = 1.116 seconds 00:05:58.676 10:19:52 -- unit/unittest.sh@23 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut 00:05:58.676 00:05:58.676 00:05:58.676 CUnit - A unit testing framework for C - Version 2.1-3 00:05:58.676 http://cunit.sourceforge.net/ 00:05:58.676 00:05:58.676 00:05:58.676 Suite: raid_sb 00:05:58.676 Test: test_raid_bdev_write_superblock ...passed 00:05:58.676 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:05:58.676 Test: 
test_raid_bdev_parse_superblock ...[2024-07-12 10:19:52.532513] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 120:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:05:58.676 passed 00:05:58.676 00:05:58.676 Run Summary: Type Total Ran Passed Failed Inactive 00:05:58.676 suites 1 1 n/a 0 0 00:05:58.676 tests 3 3 3 0 0 00:05:58.676 asserts 32 32 32 0 n/a 00:05:58.676 00:05:58.676 Elapsed time = 0.001 seconds 00:05:58.676 10:19:52 -- unit/unittest.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/concat.c/concat_ut 00:05:58.676 00:05:58.676 00:05:58.676 CUnit - A unit testing framework for C - Version 2.1-3 00:05:58.676 http://cunit.sourceforge.net/ 00:05:58.676 00:05:58.676 00:05:58.676 Suite: concat 00:05:58.676 Test: test_concat_start ...passed 00:05:58.676 Test: test_concat_rw ...passed 00:05:58.676 Test: test_concat_null_payload ...passed 00:05:58.676 00:05:58.676 Run Summary: Type Total Ran Passed Failed Inactive 00:05:58.676 suites 1 1 n/a 0 0 00:05:58.676 tests 3 3 3 0 0 00:05:58.676 asserts 8097 8097 8097 0 n/a 00:05:58.676 00:05:58.676 Elapsed time = 0.008 seconds 00:05:58.676 10:19:52 -- unit/unittest.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid1.c/raid1_ut 00:05:58.934 00:05:58.934 00:05:58.934 CUnit - A unit testing framework for C - Version 2.1-3 00:05:58.934 http://cunit.sourceforge.net/ 00:05:58.934 00:05:58.934 00:05:58.934 Suite: raid1 00:05:58.934 Test: test_raid1_start ...passed 00:05:58.934 Test: test_raid1_read_balancing ...passed 00:05:58.934 00:05:58.934 Run Summary: Type Total Ran Passed Failed Inactive 00:05:58.934 suites 1 1 n/a 0 0 00:05:58.934 tests 2 2 2 0 0 00:05:58.934 asserts 2856 2856 2856 0 n/a 00:05:58.934 00:05:58.934 Elapsed time = 0.004 seconds 00:05:58.934 10:19:52 -- unit/unittest.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut 00:05:58.934 00:05:58.934 00:05:58.934 CUnit - A unit testing framework for C - Version 2.1-3 00:05:58.934 http://cunit.sourceforge.net/ 00:05:58.934 00:05:58.934 00:05:58.934 Suite: zone 00:05:58.934 Test: test_zone_get_operation ...passed 00:05:58.934 Test: test_bdev_zone_get_info ...passed 00:05:58.934 Test: test_bdev_zone_management ...passed 00:05:58.934 Test: test_bdev_zone_append ...passed 00:05:58.934 Test: test_bdev_zone_append_with_md ...passed 00:05:58.934 Test: test_bdev_zone_appendv ...passed 00:05:58.934 Test: test_bdev_zone_appendv_with_md ...passed 00:05:58.934 Test: test_bdev_io_get_append_location ...passed 00:05:58.934 00:05:58.934 Run Summary: Type Total Ran Passed Failed Inactive 00:05:58.934 suites 1 1 n/a 0 0 00:05:58.934 tests 8 8 8 0 0 00:05:58.934 asserts 94 94 94 0 n/a 00:05:58.934 00:05:58.934 Elapsed time = 0.001 seconds 00:05:58.934 10:19:52 -- unit/unittest.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/gpt/gpt.c/gpt_ut 00:05:58.934 00:05:58.934 00:05:58.934 CUnit - A unit testing framework for C - Version 2.1-3 00:05:58.934 http://cunit.sourceforge.net/ 00:05:58.934 00:05:58.934 00:05:58.934 Suite: gpt_parse 00:05:58.934 Test: test_parse_mbr_and_primary ...[2024-07-12 10:19:52.682468] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:05:58.934 [2024-07-12 10:19:52.682933] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:05:58.934 [2024-07-12 10:19:52.682995] 
/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:05:58.934 [2024-07-12 10:19:52.683087] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:05:58.934 [2024-07-12 10:19:52.683147] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:05:58.934 [2024-07-12 10:19:52.683240] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:05:58.934 passed 00:05:58.934 Test: test_parse_secondary ...[2024-07-12 10:19:52.684511] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:05:58.935 [2024-07-12 10:19:52.684613] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:05:58.935 [2024-07-12 10:19:52.684666] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:05:58.935 [2024-07-12 10:19:52.684706] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:05:58.935 passed 00:05:58.935 Test: test_check_mbr ...[2024-07-12 10:19:52.685591] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:05:58.935 passed 00:05:58.935 Test: test_read_header ...[2024-07-12 10:19:52.685641] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:05:58.935 [2024-07-12 10:19:52.685691] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=600 00:05:58.935 [2024-07-12 10:19:52.685781] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 177:gpt_read_header: *ERROR*: head crc32 does not match, provided=584158336, calculated=3316781438 00:05:58.935 [2024-07-12 10:19:52.685845] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 184:gpt_read_header: *ERROR*: signature did not match 00:05:58.935 [2024-07-12 10:19:52.685880] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 191:gpt_read_header: *ERROR*: head my_lba(7016996765293437281) != expected(1) 00:05:58.935 [2024-07-12 10:19:52.685905] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 135:gpt_lba_range_check: *ERROR*: Head's usable_lba_end(7016996765293437281) > lba_end(0) 00:05:58.935 [2024-07-12 10:19:52.685930] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 197:gpt_read_header: *ERROR*: lba range check error 00:05:58.935 passed 00:05:58.935 Test: test_read_partitions ...[2024-07-12 10:19:52.685989] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=256 which exceeds max=128 00:05:58.935 [2024-07-12 10:19:52.686030] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 95:gpt_read_partitions: *ERROR*: Partition_entry_size(0) != expected(80) 00:05:58.935 [2024-07-12 10:19:52.686057] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 59:gpt_get_partitions_buf: *ERROR*: Buffer size is not enough 00:05:58.935 [2024-07-12 10:19:52.686075] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 105:gpt_read_partitions: *ERROR*: Failed to get gpt partitions buf 00:05:58.935 [2024-07-12 10:19:52.686451] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 113:gpt_read_partitions: *ERROR*: 
GPT partition entry array crc32 did not match 00:05:58.935 passed 00:05:58.935 00:05:58.935 Run Summary: Type Total Ran Passed Failed Inactive 00:05:58.935 suites 1 1 n/a 0 0 00:05:58.935 tests 5 5 5 0 0 00:05:58.935 asserts 33 33 33 0 n/a 00:05:58.935 00:05:58.935 Elapsed time = 0.005 seconds 00:05:58.935 10:19:52 -- unit/unittest.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/part.c/part_ut 00:05:58.935 00:05:58.935 00:05:58.935 CUnit - A unit testing framework for C - Version 2.1-3 00:05:58.935 http://cunit.sourceforge.net/ 00:05:58.935 00:05:58.935 00:05:58.935 Suite: bdev_part 00:05:58.935 Test: part_test ...[2024-07-12 10:19:52.727227] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name test1 already exists 00:05:58.935 passed 00:05:58.935 Test: part_free_test ...passed 00:05:58.935 Test: part_get_io_channel_test ...passed 00:05:58.935 Test: part_construct_ext ...passed 00:05:58.935 00:05:58.935 Run Summary: Type Total Ran Passed Failed Inactive 00:05:58.935 suites 1 1 n/a 0 0 00:05:58.935 tests 4 4 4 0 0 00:05:58.935 asserts 48 48 48 0 n/a 00:05:58.935 00:05:58.935 Elapsed time = 0.056 seconds 00:05:58.935 10:19:52 -- unit/unittest.sh@29 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut 00:05:58.935 00:05:58.935 00:05:58.935 CUnit - A unit testing framework for C - Version 2.1-3 00:05:58.935 http://cunit.sourceforge.net/ 00:05:58.935 00:05:58.935 00:05:58.935 Suite: scsi_nvme_suite 00:05:58.935 Test: scsi_nvme_translate_test ...passed 00:05:58.935 00:05:58.935 Run Summary: Type Total Ran Passed Failed Inactive 00:05:58.935 suites 1 1 n/a 0 0 00:05:58.935 tests 1 1 1 0 0 00:05:58.935 asserts 104 104 104 0 n/a 00:05:58.935 00:05:58.935 Elapsed time = 0.000 seconds 00:05:58.935 10:19:52 -- unit/unittest.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut 00:05:58.935 00:05:58.935 00:05:58.935 CUnit - A unit testing framework for C - Version 2.1-3 00:05:58.935 http://cunit.sourceforge.net/ 00:05:58.935 00:05:58.935 00:05:58.935 Suite: lvol 00:05:58.935 Test: ut_lvs_init ...[2024-07-12 10:19:52.858200] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 180:_vbdev_lvs_create_cb: *ERROR*: Cannot create lvol store bdev 00:05:58.935 passed 00:05:58.935 Test: ut_lvol_init ...[2024-07-12 10:19:52.858658] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 264:vbdev_lvs_create: *ERROR*: Cannot create blobstore device 00:05:58.935 passed 00:05:58.935 Test: ut_lvol_snapshot ...passed 00:05:58.935 Test: ut_lvol_clone ...passed 00:05:58.935 Test: ut_lvs_destroy ...passed 00:05:58.935 Test: ut_lvs_unload ...passed 00:05:58.935 Test: ut_lvol_resize ...passed 00:05:58.935 Test: ut_lvol_set_read_only ...[2024-07-12 10:19:52.860254] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1391:vbdev_lvol_resize: *ERROR*: lvol does not exist 00:05:58.935 passed 00:05:58.935 Test: ut_lvol_hotremove ...passed 00:05:58.935 Test: ut_vbdev_lvol_get_io_channel ...passed 00:05:58.935 Test: ut_vbdev_lvol_io_type_supported ...passed 00:05:58.935 Test: ut_lvol_read_write ...passed 00:05:58.935 Test: ut_vbdev_lvol_submit_request ...passed 00:05:58.935 Test: ut_lvol_examine_config ...passed 00:05:58.935 Test: ut_lvol_examine_disk ...[2024-07-12 10:19:52.861071] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1533:_vbdev_lvs_examine_finish: *ERROR*: Error opening lvol UNIT_TEST_UUID 00:05:58.935 passed 00:05:58.935 Test: ut_lvol_rename ...[2024-07-12 10:19:52.862227] 
/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 105:_vbdev_lvol_change_bdev_alias: *ERROR*: cannot add alias 'lvs/new_lvol_name' 00:05:58.935 [2024-07-12 10:19:52.862337] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1341:vbdev_lvol_rename: *ERROR*: renaming lvol to 'new_lvol_name' does not succeed 00:05:58.935 passed 00:05:58.935 Test: ut_bdev_finish ...passed 00:05:58.935 Test: ut_lvs_rename ...passed 00:05:58.935 Test: ut_lvol_seek ...passed 00:05:58.935 Test: ut_esnap_dev_create ...[2024-07-12 10:19:52.863072] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1868:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : NULL esnap ID 00:05:58.935 [2024-07-12 10:19:52.863178] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1874:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID length (36) 00:05:58.935 [2024-07-12 10:19:52.863208] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1879:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID: not a UUID 00:05:58.935 [2024-07-12 10:19:52.863258] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1900:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : unable to claim esnap bdev 'a27fd8fe-d4b9-431e-a044-271016228ce4': -1 00:05:58.935 passed 00:05:58.935 Test: ut_lvol_esnap_clone_bad_args ...[2024-07-12 10:19:52.863459] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1277:vbdev_lvol_create_bdev_clone: *ERROR*: lvol store not specified 00:05:58.935 [2024-07-12 10:19:52.863495] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1284:vbdev_lvol_create_bdev_clone: *ERROR*: bdev '255f4236-9427-42d0-a9d1-aa17f37dd8db' could not be opened: error -19 00:05:58.935 passed 00:05:58.935 00:05:58.935 Run Summary: Type Total Ran Passed Failed Inactive 00:05:58.935 suites 1 1 n/a 0 0 00:05:58.935 tests 21 21 21 0 0 00:05:58.935 asserts 712 712 712 0 n/a 00:05:58.935 00:05:58.935 Elapsed time = 0.006 seconds 00:05:59.195 10:19:52 -- unit/unittest.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut 00:05:59.195 00:05:59.195 00:05:59.195 CUnit - A unit testing framework for C - Version 2.1-3 00:05:59.195 http://cunit.sourceforge.net/ 00:05:59.195 00:05:59.195 00:05:59.195 Suite: zone_block 00:05:59.195 Test: test_zone_block_create ...passed 00:05:59.195 Test: test_zone_block_create_invalid ...[2024-07-12 10:19:52.923404] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 624:zone_block_insert_name: *ERROR*: base bdev Nvme0n1 already claimed 00:05:59.195 [2024-07-12 10:19:52.923801] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-07-12 10:19:52.923975] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 721:zone_block_register: *ERROR*: Base bdev zone_dev1 is already a zoned bdev 00:05:59.195 [2024-07-12 10:19:52.924034] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-07-12 10:19:52.924181] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 860:vbdev_zone_block_create: *ERROR*: Zone capacity can't be 0 00:05:59.195 [2024-07-12 10:19:52.924218] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-07-12 10:19:52.924309] 
/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 865:vbdev_zone_block_create: *ERROR*: Optimal open zones can't be 0 00:05:59.195 [2024-07-12 10:19:52.924354] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argumentpassed 00:05:59.195 Test: test_get_zone_info ...[2024-07-12 10:19:52.924927] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:59.195 [2024-07-12 10:19:52.924991] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:59.195 [2024-07-12 10:19:52.925040] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:59.195 passed 00:05:59.195 Test: test_supported_io_types ...passed 00:05:59.195 Test: test_reset_zone ...[2024-07-12 10:19:52.925931] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:59.195 [2024-07-12 10:19:52.926000] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:59.195 passed 00:05:59.195 Test: test_open_zone ...[2024-07-12 10:19:52.926491] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:59.195 [2024-07-12 10:19:52.927221] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:59.195 [2024-07-12 10:19:52.927306] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:59.195 passed 00:05:59.195 Test: test_zone_write ...[2024-07-12 10:19:52.927861] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:05:59.195 [2024-07-12 10:19:52.927934] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:59.195 [2024-07-12 10:19:52.928002] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:05:59.195 [2024-07-12 10:19:52.928052] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:59.195 [2024-07-12 10:19:52.934112] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x407, wp 0x405) 00:05:59.195 [2024-07-12 10:19:52.934176] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:05:59.195 [2024-07-12 10:19:52.934261] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x400, wp 0x405) 00:05:59.195 [2024-07-12 10:19:52.934286] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:59.195 [2024-07-12 10:19:52.940514] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:05:59.195 [2024-07-12 10:19:52.940587] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:59.195 passed 00:05:59.195 Test: test_zone_read ...[2024-07-12 10:19:52.941113] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x4ff8, len 0x10) 00:05:59.195 [2024-07-12 10:19:52.941171] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:59.195 [2024-07-12 10:19:52.941262] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 460:zone_block_read: *ERROR*: Trying to read from invalid zone (lba 0x5000) 00:05:59.195 [2024-07-12 10:19:52.941302] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:59.195 [2024-07-12 10:19:52.941844] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x3f8, len 0x10) 00:05:59.195 [2024-07-12 10:19:52.941892] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:59.195 passed 00:05:59.195 Test: test_close_zone ...[2024-07-12 10:19:52.942311] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:59.195 [2024-07-12 10:19:52.942400] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:59.195 [2024-07-12 10:19:52.942656] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:59.195 [2024-07-12 10:19:52.942722] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:59.195 passed 00:05:59.195 Test: test_finish_zone ...[2024-07-12 10:19:52.943438] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:59.195 [2024-07-12 10:19:52.943497] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
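
Editor's note: the zone_block write errors above encode the two basic zoned-storage rules: a write must start exactly at the zone's write pointer ("invalid address (lba 0x407, wp 0x405)") and must not run past the zone capacity ("Write exceeds zone capacity"). A minimal check sketched from just those messages; names and error codes are illustrative:

    #include <errno.h>
    #include <stdint.h>

    /* Illustrative zoned-write validation matching the messages above. */
    static int
    zone_write_check(uint64_t zone_start, uint64_t zone_capacity,
                     uint64_t write_pointer, uint64_t lba, uint64_t num_blocks)
    {
            if (lba != write_pointer) {
                    /* "Trying to write to zone with invalid address (lba ..., wp ...)" */
                    return -EINVAL;
            }
            if (lba + num_blocks > zone_start + zone_capacity) {
                    /* "Write exceeds zone capacity (lba ..., len ..., wp ...)" */
                    return -EINVAL;
            }
            return 0;
    }

For example, with zone_start = 0 and zone_capacity = 0x400, a write at lba 0x3f0 of 0x20 blocks would end at 0x410 and fail, which matches the "(lba 0x3f0, len 0x20, wp 0x3f0)" case in the log.
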
00:05:59.195 passed 00:05:59.195 Test: test_append_zone ...[2024-07-12 10:19:52.943937] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:05:59.195 [2024-07-12 10:19:52.943981] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:59.195 [2024-07-12 10:19:52.944029] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:05:59.195 [2024-07-12 10:19:52.944058] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:59.195 [2024-07-12 10:19:52.957008] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:05:59.195 [2024-07-12 10:19:52.957069] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:59.195 passed 00:05:59.195 00:05:59.195 Run Summary: Type Total Ran Passed Failed Inactive 00:05:59.195 suites 1 1 n/a 0 0 00:05:59.195 tests 11 11 11 0 0 00:05:59.195 asserts 3437 3437 3437 0 n/a 00:05:59.195 00:05:59.195 Elapsed time = 0.035 seconds 00:05:59.195 10:19:53 -- unit/unittest.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/mt/bdev.c/bdev_ut 00:05:59.195 00:05:59.195 00:05:59.195 CUnit - A unit testing framework for C - Version 2.1-3 00:05:59.195 http://cunit.sourceforge.net/ 00:05:59.195 00:05:59.195 00:05:59.195 Suite: bdev 00:05:59.195 Test: basic ...[2024-07-12 10:19:53.062606] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x55efd4bc7401): Operation not permitted (rc=-1) 00:05:59.195 [2024-07-12 10:19:53.063024] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device 0x6130000003c0 (0x55efd4bc73c0): Operation not permitted (rc=-1) 00:05:59.195 [2024-07-12 10:19:53.063073] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x55efd4bc7401): Operation not permitted (rc=-1) 00:05:59.195 passed 00:05:59.453 Test: unregister_and_close ...passed 00:05:59.453 Test: unregister_and_close_different_threads ...passed 00:05:59.453 Test: basic_qos ...passed 00:05:59.453 Test: put_channel_during_reset ...passed 00:05:59.453 Test: aborted_reset ...passed 00:05:59.710 Test: aborted_reset_no_outstanding_io ...passed 00:05:59.710 Test: io_during_reset ...passed 00:05:59.710 Test: reset_completions ...passed 00:05:59.711 Test: io_during_qos_queue ...passed 00:05:59.711 Test: io_during_qos_reset ...passed 00:05:59.968 Test: enomem ...passed 00:05:59.968 Test: enomem_multi_bdev ...passed 00:05:59.968 Test: enomem_multi_bdev_unregister ...passed 00:05:59.968 Test: enomem_multi_io_target ...passed 00:05:59.968 Test: qos_dynamic_enable ...passed 00:05:59.968 Test: bdev_histograms_mt ...passed 00:05:59.968 Test: bdev_set_io_timeout_mt ...[2024-07-12 10:19:53.887336] thread.c: 465:spdk_thread_lib_fini: *ERROR*: io_device 0x6130000003c0 not unregistered 00:05:59.968 passed 00:06:00.225 Test: lock_lba_range_then_submit_io ...[2024-07-12 10:19:53.906091] thread.c:2163:spdk_io_device_register: *ERROR*: io_device 0x55efd4bc7380 already registered (old:0x6130000003c0 new:0x613000000c80) 00:06:00.225 
passed 00:06:00.225 Test: unregister_during_reset ...passed 00:06:00.225 Test: event_notify_and_close ...passed 00:06:00.225 Test: unregister_and_qos_poller ...passed 00:06:00.225 Suite: bdev_wrong_thread 00:06:00.225 Test: spdk_bdev_register_wt ...passed 00:06:00.225 Test: spdk_bdev_examine_wt ...[2024-07-12 10:19:54.055812] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8364:spdk_bdev_register: *ERROR*: Cannot examine bdev wt_bdev on thread 0x618000001480 (0x618000001480) 00:06:00.225 [2024-07-12 10:19:54.056188] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 793:spdk_bdev_examine: *ERROR*: Cannot examine bdev ut_bdev_wt on thread 0x618000001480 (0x618000001480) 00:06:00.225 passed 00:06:00.225 00:06:00.225 Run Summary: Type Total Ran Passed Failed Inactive 00:06:00.225 suites 2 2 n/a 0 0 00:06:00.225 tests 24 24 24 0 0 00:06:00.225 asserts 621 621 621 0 n/a 00:06:00.225 00:06:00.225 Elapsed time = 1.024 seconds 00:06:00.225 00:06:00.225 real 0m4.633s 00:06:00.225 user 0m2.014s 00:06:00.225 sys 0m2.621s 00:06:00.225 ************************************ 00:06:00.225 END TEST unittest_bdev 00:06:00.225 ************************************ 00:06:00.225 10:19:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.225 10:19:54 -- common/autotest_common.sh@10 -- # set +x 00:06:00.225 10:19:54 -- unit/unittest.sh@213 -- # grep -q '#define SPDK_CONFIG_CRYPTO 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:00.225 10:19:54 -- unit/unittest.sh@218 -- # grep -q '#define SPDK_CONFIG_VBDEV_COMPRESS 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:00.225 10:19:54 -- unit/unittest.sh@223 -- # grep -q '#define SPDK_CONFIG_DPDK_COMPRESSDEV 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:00.225 10:19:54 -- unit/unittest.sh@227 -- # grep -q '#define SPDK_CONFIG_RAID5F 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:00.225 10:19:54 -- unit/unittest.sh@228 -- # run_test unittest_bdev_raid5f /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:06:00.225 10:19:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:00.225 10:19:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:00.225 10:19:54 -- common/autotest_common.sh@10 -- # set +x 00:06:00.225 ************************************ 00:06:00.225 START TEST unittest_bdev_raid5f 00:06:00.225 ************************************ 00:06:00.225 10:19:54 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:06:00.483 00:06:00.483 00:06:00.483 CUnit - A unit testing framework for C - Version 2.1-3 00:06:00.483 http://cunit.sourceforge.net/ 00:06:00.483 00:06:00.483 00:06:00.483 Suite: raid5f 00:06:00.483 Test: test_raid5f_start ...passed 00:06:01.048 Test: test_raid5f_submit_read_request ...passed 00:06:01.048 Test: test_raid5f_stripe_request_map_iovecs ...passed 00:06:05.232 Test: test_raid5f_submit_full_stripe_write_request ...passed 00:06:23.308 Test: test_raid5f_chunk_write_error ...passed 00:06:29.917 Test: test_raid5f_chunk_write_error_with_enomem ...passed 00:06:31.831 Test: test_raid5f_submit_full_stripe_write_request_degraded ...passed 00:06:58.378 Test: test_raid5f_submit_read_request_degraded ...passed 00:06:58.378 00:06:58.378 Run Summary: Type Total Ran Passed Failed Inactive 00:06:58.378 suites 1 1 n/a 0 0 00:06:58.379 tests 8 8 8 0 0 00:06:58.379 asserts 351864 351864 351864 0 n/a 00:06:58.379 00:06:58.379 Elapsed time = 58.064 seconds 00:06:58.379 00:06:58.379 real 0m58.152s 00:06:58.379 user 
0m55.225s 00:06:58.379 sys 0m2.908s 00:06:58.379 10:20:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.379 10:20:52 -- common/autotest_common.sh@10 -- # set +x 00:06:58.379 ************************************ 00:06:58.379 END TEST unittest_bdev_raid5f 00:06:58.379 ************************************ 00:06:58.637 10:20:52 -- unit/unittest.sh@231 -- # run_test unittest_blob_blobfs unittest_blob 00:06:58.637 10:20:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:58.637 10:20:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:58.637 10:20:52 -- common/autotest_common.sh@10 -- # set +x 00:06:58.637 ************************************ 00:06:58.637 START TEST unittest_blob_blobfs 00:06:58.637 ************************************ 00:06:58.637 10:20:52 -- common/autotest_common.sh@1104 -- # unittest_blob 00:06:58.637 10:20:52 -- unit/unittest.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut ]] 00:06:58.637 10:20:52 -- unit/unittest.sh@39 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut 00:06:58.637 00:06:58.637 00:06:58.637 CUnit - A unit testing framework for C - Version 2.1-3 00:06:58.637 http://cunit.sourceforge.net/ 00:06:58.637 00:06:58.637 00:06:58.637 Suite: blob_nocopy_noextent 00:06:58.637 Test: blob_init ...[2024-07-12 10:20:52.373826] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:06:58.637 passed 00:06:58.637 Test: blob_thin_provision ...passed 00:06:58.637 Test: blob_read_only ...passed 00:06:58.637 Test: bs_load ...[2024-07-12 10:20:52.476535] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:06:58.637 passed 00:06:58.637 Test: bs_load_custom_cluster_size ...passed 00:06:58.637 Test: bs_load_after_failed_grow ...passed 00:06:58.637 Test: bs_cluster_sz ...[2024-07-12 10:20:52.513821] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:06:58.637 [2024-07-12 10:20:52.514338] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:06:58.637 [2024-07-12 10:20:52.514682] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:06:58.637 passed 00:06:58.637 Test: bs_resize_md ...passed 00:06:58.637 Test: bs_destroy ...passed 00:06:58.895 Test: bs_type ...passed 00:06:58.895 Test: bs_super_block ...passed 00:06:58.895 Test: bs_test_recover_cluster_count ...passed 00:06:58.895 Test: bs_grow_live ...passed 00:06:58.895 Test: bs_grow_live_no_space ...passed 00:06:58.895 Test: bs_test_grow ...passed 00:06:58.895 Test: blob_serialize_test ...passed 00:06:58.895 Test: super_block_crc ...passed 00:06:58.895 Test: blob_thin_prov_write_count_io ...passed 00:06:58.895 Test: bs_load_iter_test ...passed 00:06:58.895 Test: blob_relations ...[2024-07-12 10:20:52.701097] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:58.895 [2024-07-12 10:20:52.701368] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:58.895 [2024-07-12 10:20:52.702303] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:58.895 [2024-07-12 10:20:52.702531] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:58.895 passed 00:06:58.895 Test: blob_relations2 ...[2024-07-12 10:20:52.716151] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:58.895 [2024-07-12 10:20:52.716365] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:58.895 [2024-07-12 10:20:52.716455] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:58.895 [2024-07-12 10:20:52.716559] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:58.895 [2024-07-12 10:20:52.717980] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:58.895 [2024-07-12 10:20:52.718164] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:58.895 [2024-07-12 10:20:52.718653] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:58.895 [2024-07-12 10:20:52.718822] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:58.895 passed 00:06:58.895 Test: blob_relations3 ...passed 00:06:59.153 Test: blobstore_clean_power_failure ...passed 00:06:59.153 Test: blob_delete_snapshot_power_failure ...[2024-07-12 10:20:52.886626] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:06:59.153 [2024-07-12 10:20:52.902057] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:06:59.153 [2024-07-12 10:20:52.902355] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:59.154 [2024-07-12 10:20:52.902443] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:59.154 [2024-07-12 10:20:52.917303] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:06:59.154 [2024-07-12 10:20:52.917710] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:06:59.154 [2024-07-12 10:20:52.917810] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:59.154 [2024-07-12 10:20:52.918035] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:59.154 [2024-07-12 10:20:52.932849] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:06:59.154 [2024-07-12 10:20:52.933278] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:59.154 [2024-07-12 10:20:52.946833] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:06:59.154 [2024-07-12 10:20:52.947199] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:59.154 [2024-07-12 10:20:52.961150] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:06:59.154 [2024-07-12 10:20:52.961557] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:59.154 passed 00:06:59.154 Test: blob_create_snapshot_power_failure ...[2024-07-12 10:20:53.005657] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:06:59.154 [2024-07-12 10:20:53.034446] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:06:59.154 [2024-07-12 10:20:53.048671] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:06:59.411 passed 00:06:59.411 Test: blob_io_unit ...passed 00:06:59.411 Test: blob_io_unit_compatibility ...passed 00:06:59.411 Test: blob_ext_md_pages ...passed 00:06:59.411 Test: blob_esnap_io_4096_4096 ...passed 00:06:59.411 Test: blob_esnap_io_512_512 ...passed 00:06:59.411 Test: blob_esnap_io_4096_512 ...passed 00:06:59.411 Test: blob_esnap_io_512_4096 ...passed 00:06:59.411 Suite: blob_bs_nocopy_noextent 00:06:59.411 Test: blob_open ...passed 00:06:59.411 Test: blob_create ...[2024-07-12 10:20:53.301071] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:06:59.411 passed 00:06:59.669 Test: blob_create_loop ...passed 00:06:59.669 Test: blob_create_fail ...[2024-07-12 10:20:53.410467] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:59.669 passed 00:06:59.669 Test: blob_create_internal ...passed 00:06:59.669 Test: blob_create_zero_extent ...passed 00:06:59.669 Test: blob_snapshot ...passed 00:06:59.669 Test: blob_clone ...passed 00:06:59.926 Test: blob_inflate ...[2024-07-12 10:20:53.611373] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:06:59.926 passed 00:06:59.926 Test: blob_delete ...passed 00:06:59.926 Test: blob_resize_test ...[2024-07-12 10:20:53.687623] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:06:59.926 passed 00:06:59.926 Test: channel_ops ...passed 00:06:59.926 Test: blob_super ...passed 00:06:59.926 Test: blob_rw_verify_iov ...passed 00:07:00.184 Test: blob_unmap ...passed 00:07:00.184 Test: blob_iter ...passed 00:07:00.184 Test: blob_parse_md ...passed 00:07:00.184 Test: bs_load_pending_removal ...passed 00:07:00.184 Test: bs_unload ...[2024-07-12 10:20:54.000036] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:07:00.184 passed 00:07:00.184 Test: bs_usable_clusters ...passed 00:07:00.184 Test: blob_crc ...[2024-07-12 10:20:54.063744] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:00.184 [2024-07-12 10:20:54.064161] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:00.184 passed 00:07:00.184 Test: blob_flags ...passed 00:07:00.443 Test: bs_version ...passed 00:07:00.443 Test: blob_set_xattrs_test ...[2024-07-12 10:20:54.159454] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:00.443 [2024-07-12 10:20:54.159823] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:00.443 passed 00:07:00.443 Test: blob_thin_prov_alloc ...passed 00:07:00.443 Test: blob_insert_cluster_msg_test ...passed 00:07:00.714 Test: blob_thin_prov_rw ...passed 00:07:00.714 Test: blob_thin_prov_rle ...passed 00:07:00.714 Test: blob_thin_prov_rw_iov ...passed 00:07:00.714 Test: blob_snapshot_rw ...passed 00:07:00.714 Test: blob_snapshot_rw_iov ...passed 00:07:00.986 Test: blob_inflate_rw ...passed 00:07:00.986 Test: blob_snapshot_freeze_io ...passed 00:07:01.245 Test: blob_operation_split_rw ...passed 00:07:01.245 Test: blob_operation_split_rw_iov ...passed 00:07:01.245 Test: blob_simultaneous_operations ...[2024-07-12 10:20:55.089361] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:01.245 [2024-07-12 10:20:55.089761] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:01.245 [2024-07-12 10:20:55.090918] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:01.245 [2024-07-12 10:20:55.091128] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:01.245 [2024-07-12 10:20:55.101863] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:01.245 [2024-07-12 10:20:55.102095] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:01.245 [2024-07-12 10:20:55.102265] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot 
remove snapshot because it is open 00:07:01.245 [2024-07-12 10:20:55.102482] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:01.245 passed 00:07:01.245 Test: blob_persist_test ...passed 00:07:01.502 Test: blob_decouple_snapshot ...passed 00:07:01.502 Test: blob_seek_io_unit ...passed 00:07:01.502 Test: blob_nested_freezes ...passed 00:07:01.502 Suite: blob_blob_nocopy_noextent 00:07:01.502 Test: blob_write ...passed 00:07:01.502 Test: blob_read ...passed 00:07:01.502 Test: blob_rw_verify ...passed 00:07:01.760 Test: blob_rw_verify_iov_nomem ...passed 00:07:01.760 Test: blob_rw_iov_read_only ...passed 00:07:01.760 Test: blob_xattr ...passed 00:07:01.760 Test: blob_dirty_shutdown ...passed 00:07:01.760 Test: blob_is_degraded ...passed 00:07:01.760 Suite: blob_esnap_bs_nocopy_noextent 00:07:01.760 Test: blob_esnap_create ...passed 00:07:01.760 Test: blob_esnap_thread_add_remove ...passed 00:07:02.019 Test: blob_esnap_clone_snapshot ...passed 00:07:02.019 Test: blob_esnap_clone_inflate ...passed 00:07:02.019 Test: blob_esnap_clone_decouple ...passed 00:07:02.019 Test: blob_esnap_clone_reload ...passed 00:07:02.019 Test: blob_esnap_hotplug ...passed 00:07:02.019 Suite: blob_nocopy_extent 00:07:02.019 Test: blob_init ...[2024-07-12 10:20:55.877921] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:07:02.019 passed 00:07:02.019 Test: blob_thin_provision ...passed 00:07:02.019 Test: blob_read_only ...passed 00:07:02.019 Test: bs_load ...[2024-07-12 10:20:55.930990] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:07:02.019 passed 00:07:02.019 Test: bs_load_custom_cluster_size ...passed 00:07:02.277 Test: bs_load_after_failed_grow ...passed 00:07:02.277 Test: bs_cluster_sz ...[2024-07-12 10:20:55.961354] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:07:02.277 [2024-07-12 10:20:55.961705] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
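
The bs_cluster_sz failures around this point are deliberate: the test hands the blobstore zeroed-out options and a 4095-byte cluster size, and the library refuses both before touching the device. A minimal standalone sketch of the two checks implied by these messages, reconstructed from the log alone — the real logic lives in lib/blob/blobstore.c (bs_opts_verify, bs_alloc) and differs in detail, and every name below is illustrative:

    #include <errno.h>
    #include <stdint.h>
    #include <stdio.h>

    #define BS_PAGE_SIZE 4096u  /* matches the "page size 4096" in the messages above */

    /* Illustrative stand-in for spdk_bs_opts; only the fields the sketch needs. */
    struct bs_opts_sketch {
        uint32_t cluster_sz;   /* bytes per cluster */
        uint32_t num_md_pages; /* pages reserved for metadata */
        uint32_t max_md_ops;
    };

    static int bs_opts_verify_sketch(const struct bs_opts_sketch *opts)
    {
        /* "Blobstore options cannot be set to 0" */
        if (opts->cluster_sz == 0 || opts->num_md_pages == 0 || opts->max_md_ops == 0) {
            fprintf(stderr, "Blobstore options cannot be set to 0\n");
            return -EINVAL;
        }
        /* "Cluster size 4095 is smaller than page size 4096" */
        if (opts->cluster_sz < BS_PAGE_SIZE) {
            fprintf(stderr, "Cluster size %u is smaller than page size %u\n",
                    opts->cluster_sz, BS_PAGE_SIZE);
            return -EINVAL;
        }
        return 0;
    }

    int main(void)
    {
        struct bs_opts_sketch bad = { .cluster_sz = 4095, .num_md_pages = 1, .max_md_ops = 1 };
        /* Expect rejection, mirroring what the unit test asserts. */
        return bs_opts_verify_sketch(&bad) == -EINVAL ? 0 : 1;
    }

The test passes because these errors fire: an *ERROR* line in this log is usually the library behaving as asserted, not a test failure.
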
00:07:02.277 [2024-07-12 10:20:55.961889] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:07:02.277 passed 00:07:02.277 Test: bs_resize_md ...passed 00:07:02.277 Test: bs_destroy ...passed 00:07:02.277 Test: bs_type ...passed 00:07:02.277 Test: bs_super_block ...passed 00:07:02.277 Test: bs_test_recover_cluster_count ...passed 00:07:02.277 Test: bs_grow_live ...passed 00:07:02.277 Test: bs_grow_live_no_space ...passed 00:07:02.277 Test: bs_test_grow ...passed 00:07:02.277 Test: blob_serialize_test ...passed 00:07:02.277 Test: super_block_crc ...passed 00:07:02.277 Test: blob_thin_prov_write_count_io ...passed 00:07:02.277 Test: bs_load_iter_test ...passed 00:07:02.277 Test: blob_relations ...[2024-07-12 10:20:56.129529] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:02.277 [2024-07-12 10:20:56.129855] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:02.277 [2024-07-12 10:20:56.131105] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:02.277 [2024-07-12 10:20:56.131322] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:02.277 passed 00:07:02.277 Test: blob_relations2 ...[2024-07-12 10:20:56.147008] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:02.277 [2024-07-12 10:20:56.147276] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:02.277 [2024-07-12 10:20:56.147382] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:02.277 [2024-07-12 10:20:56.147588] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:02.277 [2024-07-12 10:20:56.149598] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:02.277 [2024-07-12 10:20:56.149800] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:02.277 [2024-07-12 10:20:56.150426] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:02.277 [2024-07-12 10:20:56.150615] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:02.277 passed 00:07:02.277 Test: blob_relations3 ...passed 00:07:02.535 Test: blobstore_clean_power_failure ...passed 00:07:02.535 Test: blob_delete_snapshot_power_failure ...[2024-07-12 10:20:56.318607] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:02.535 [2024-07-12 10:20:56.332819] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:02.535 [2024-07-12 10:20:56.347218] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:02.535 [2024-07-12 10:20:56.347569] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:02.535 [2024-07-12 10:20:56.347653] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:02.535 [2024-07-12 10:20:56.362306] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:02.535 [2024-07-12 10:20:56.362648] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:02.535 [2024-07-12 10:20:56.362742] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:02.535 [2024-07-12 10:20:56.362879] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:02.535 [2024-07-12 10:20:56.378374] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:02.535 [2024-07-12 10:20:56.378621] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:02.536 [2024-07-12 10:20:56.378686] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:02.536 [2024-07-12 10:20:56.378851] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:02.536 [2024-07-12 10:20:56.393814] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:07:02.536 [2024-07-12 10:20:56.394153] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:02.536 [2024-07-12 10:20:56.408331] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:07:02.536 [2024-07-12 10:20:56.408714] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:02.536 [2024-07-12 10:20:56.423627] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:07:02.536 [2024-07-12 10:20:56.424022] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:02.536 passed 00:07:02.794 Test: blob_create_snapshot_power_failure ...[2024-07-12 10:20:56.468392] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:02.794 [2024-07-12 10:20:56.482789] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:02.794 [2024-07-12 10:20:56.510563] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:02.794 [2024-07-12 10:20:56.524740] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:07:02.794 passed 00:07:02.794 Test: blob_io_unit ...passed 00:07:02.794 Test: blob_io_unit_compatibility ...passed 00:07:02.794 Test: blob_ext_md_pages ...passed 00:07:02.794 Test: blob_esnap_io_4096_4096 ...passed 00:07:02.794 Test: blob_esnap_io_512_512 ...passed 00:07:02.794 Test: blob_esnap_io_4096_512 ...passed 00:07:03.052 Test: 
blob_esnap_io_512_4096 ...passed 00:07:03.052 Suite: blob_bs_nocopy_extent 00:07:03.052 Test: blob_open ...passed 00:07:03.052 Test: blob_create ...[2024-07-12 10:20:56.785923] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:07:03.052 passed 00:07:03.052 Test: blob_create_loop ...passed 00:07:03.052 Test: blob_create_fail ...[2024-07-12 10:20:56.907100] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:03.052 passed 00:07:03.052 Test: blob_create_internal ...passed 00:07:03.310 Test: blob_create_zero_extent ...passed 00:07:03.310 Test: blob_snapshot ...passed 00:07:03.310 Test: blob_clone ...passed 00:07:03.310 Test: blob_inflate ...[2024-07-12 10:20:57.097897] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:07:03.310 passed 00:07:03.310 Test: blob_delete ...passed 00:07:03.310 Test: blob_resize_test ...[2024-07-12 10:20:57.171111] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:07:03.310 passed 00:07:03.310 Test: channel_ops ...passed 00:07:03.568 Test: blob_super ...passed 00:07:03.568 Test: blob_rw_verify_iov ...passed 00:07:03.568 Test: blob_unmap ...passed 00:07:03.568 Test: blob_iter ...passed 00:07:03.568 Test: blob_parse_md ...passed 00:07:03.568 Test: bs_load_pending_removal ...passed 00:07:03.568 Test: bs_unload ...[2024-07-12 10:20:57.484325] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:07:03.568 passed 00:07:03.827 Test: bs_usable_clusters ...passed 00:07:03.827 Test: blob_crc ...[2024-07-12 10:20:57.563179] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:03.827 [2024-07-12 10:20:57.563634] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:03.827 passed 00:07:03.827 Test: blob_flags ...passed 00:07:03.827 Test: bs_version ...passed 00:07:03.827 Test: blob_set_xattrs_test ...[2024-07-12 10:20:57.684160] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:03.827 [2024-07-12 10:20:57.684548] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:03.827 passed 00:07:04.086 Test: blob_thin_prov_alloc ...passed 00:07:04.086 Test: blob_insert_cluster_msg_test ...passed 00:07:04.086 Test: blob_thin_prov_rw ...passed 00:07:04.086 Test: blob_thin_prov_rle ...passed 00:07:04.086 Test: blob_thin_prov_rw_iov ...passed 00:07:04.344 Test: blob_snapshot_rw ...passed 00:07:04.344 Test: blob_snapshot_rw_iov ...passed 00:07:04.344 Test: blob_inflate_rw ...passed 00:07:04.602 Test: blob_snapshot_freeze_io ...passed 00:07:04.602 Test: blob_operation_split_rw ...passed 00:07:04.860 Test: blob_operation_split_rw_iov ...passed 00:07:04.860 Test: blob_simultaneous_operations ...[2024-07-12 10:20:58.594176] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:04.860 [2024-07-12 
10:20:58.594542] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:04.860 [2024-07-12 10:20:58.595688] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:04.860 [2024-07-12 10:20:58.595884] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:04.860 [2024-07-12 10:20:58.605908] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:04.860 [2024-07-12 10:20:58.606142] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:04.860 [2024-07-12 10:20:58.606289] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:04.860 [2024-07-12 10:20:58.606443] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:04.860 passed 00:07:04.860 Test: blob_persist_test ...passed 00:07:04.860 Test: blob_decouple_snapshot ...passed 00:07:04.860 Test: blob_seek_io_unit ...passed 00:07:04.860 Test: blob_nested_freezes ...passed 00:07:04.860 Suite: blob_blob_nocopy_extent 00:07:05.118 Test: blob_write ...passed 00:07:05.118 Test: blob_read ...passed 00:07:05.118 Test: blob_rw_verify ...passed 00:07:05.118 Test: blob_rw_verify_iov_nomem ...passed 00:07:05.118 Test: blob_rw_iov_read_only ...passed 00:07:05.118 Test: blob_xattr ...passed 00:07:05.118 Test: blob_dirty_shutdown ...passed 00:07:05.118 Test: blob_is_degraded ...passed 00:07:05.118 Suite: blob_esnap_bs_nocopy_extent 00:07:05.376 Test: blob_esnap_create ...passed 00:07:05.376 Test: blob_esnap_thread_add_remove ...passed 00:07:05.376 Test: blob_esnap_clone_snapshot ...passed 00:07:05.376 Test: blob_esnap_clone_inflate ...passed 00:07:05.376 Test: blob_esnap_clone_decouple ...passed 00:07:05.376 Test: blob_esnap_clone_reload ...passed 00:07:05.376 Test: blob_esnap_hotplug ...passed 00:07:05.376 Suite: blob_copy_noextent 00:07:05.376 Test: blob_init ...[2024-07-12 10:20:59.269475] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:07:05.376 passed 00:07:05.376 Test: blob_thin_provision ...passed 00:07:05.633 Test: blob_read_only ...passed 00:07:05.633 Test: bs_load ...[2024-07-12 10:20:59.319252] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:07:05.633 passed 00:07:05.634 Test: bs_load_custom_cluster_size ...passed 00:07:05.634 Test: bs_load_after_failed_grow ...passed 00:07:05.634 Test: bs_cluster_sz ...[2024-07-12 10:20:59.348138] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:07:05.634 [2024-07-12 10:20:59.348384] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
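
The pairs of "Cannot remove snapshot ..." / "Failed to remove blob" errors repeated through blob_simultaneous_operations and blob_relations above come from the blobstore's deletability rules: a snapshot that is still open, or that still backs more than one clone, must not be deleted. A pared-down illustration of those two rules — a hypothetical helper, not the bs_is_blob_deletable in lib/blob/blobstore.c:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Hypothetical, simplified blob state; the real struct spdk_blob is far richer. */
    struct blob_sketch {
        bool is_snapshot;
        bool is_open;       /* still has open references */
        size_t clone_count; /* clones derived from this snapshot */
    };

    static bool snapshot_deletable_sketch(const struct blob_sketch *b)
    {
        if (!b->is_snapshot) {
            return true;
        }
        if (b->is_open) {
            fprintf(stderr, "Cannot remove snapshot because it is open\n");
            return false;
        }
        if (b->clone_count > 1) {
            fprintf(stderr, "Cannot remove snapshot with more than one clone\n");
            return false;
        }
        /* A snapshot with at most one clone can be merged away, which is the
         * delete_snapshot_* path whose sync errors also appear in this log. */
        return true;
    }

    int main(void)
    {
        struct blob_sketch open_snap = { .is_snapshot = true, .is_open = true,  .clone_count = 0 };
        struct blob_sketch busy_snap = { .is_snapshot = true, .is_open = false, .clone_count = 2 };

        /* Both cases must be refused, as the tests above assert. */
        return (!snapshot_deletable_sketch(&open_snap) &&
                !snapshot_deletable_sketch(&busy_snap)) ? 0 : 1;
    }
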
00:07:05.634 [2024-07-12 10:20:59.348542] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:07:05.634 passed 00:07:05.634 Test: bs_resize_md ...passed 00:07:05.634 Test: bs_destroy ...passed 00:07:05.634 Test: bs_type ...passed 00:07:05.634 Test: bs_super_block ...passed 00:07:05.634 Test: bs_test_recover_cluster_count ...passed 00:07:05.634 Test: bs_grow_live ...passed 00:07:05.634 Test: bs_grow_live_no_space ...passed 00:07:05.634 Test: bs_test_grow ...passed 00:07:05.634 Test: blob_serialize_test ...passed 00:07:05.634 Test: super_block_crc ...passed 00:07:05.634 Test: blob_thin_prov_write_count_io ...passed 00:07:05.634 Test: bs_load_iter_test ...passed 00:07:05.634 Test: blob_relations ...[2024-07-12 10:20:59.512850] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:05.634 [2024-07-12 10:20:59.513218] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:05.634 [2024-07-12 10:20:59.513870] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:05.634 [2024-07-12 10:20:59.514023] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:05.634 passed 00:07:05.634 Test: blob_relations2 ...[2024-07-12 10:20:59.527265] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:05.634 [2024-07-12 10:20:59.527554] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:05.634 [2024-07-12 10:20:59.527619] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:05.634 [2024-07-12 10:20:59.527709] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:05.634 [2024-07-12 10:20:59.528681] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:05.634 [2024-07-12 10:20:59.528882] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:05.634 [2024-07-12 10:20:59.529249] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:05.634 [2024-07-12 10:20:59.529397] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:05.634 passed 00:07:05.634 Test: blob_relations3 ...passed 00:07:05.891 Test: blobstore_clean_power_failure ...passed 00:07:05.891 Test: blob_delete_snapshot_power_failure ...[2024-07-12 10:20:59.689721] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:05.891 [2024-07-12 10:20:59.702046] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:05.891 [2024-07-12 10:20:59.702368] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:05.891 [2024-07-12 10:20:59.702431] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:05.891 [2024-07-12 10:20:59.714354] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:05.891 [2024-07-12 10:20:59.714637] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:05.891 [2024-07-12 10:20:59.714715] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:05.891 [2024-07-12 10:20:59.714814] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:05.891 [2024-07-12 10:20:59.726936] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:07:05.891 [2024-07-12 10:20:59.727248] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:05.891 [2024-07-12 10:20:59.739194] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:07:05.892 [2024-07-12 10:20:59.739557] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:05.892 [2024-07-12 10:20:59.752472] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:07:05.892 [2024-07-12 10:20:59.752780] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:05.892 passed 00:07:05.892 Test: blob_create_snapshot_power_failure ...[2024-07-12 10:20:59.791843] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:05.892 [2024-07-12 10:20:59.816024] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:06.149 [2024-07-12 10:20:59.828978] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:07:06.149 passed 00:07:06.149 Test: blob_io_unit ...passed 00:07:06.149 Test: blob_io_unit_compatibility ...passed 00:07:06.149 Test: blob_ext_md_pages ...passed 00:07:06.149 Test: blob_esnap_io_4096_4096 ...passed 00:07:06.149 Test: blob_esnap_io_512_512 ...passed 00:07:06.149 Test: blob_esnap_io_4096_512 ...passed 00:07:06.149 Test: blob_esnap_io_512_4096 ...passed 00:07:06.149 Suite: blob_bs_copy_noextent 00:07:06.149 Test: blob_open ...passed 00:07:06.408 Test: blob_create ...[2024-07-12 10:21:00.089437] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:07:06.408 passed 00:07:06.408 Test: blob_create_loop ...passed 00:07:06.408 Test: blob_create_fail ...[2024-07-12 10:21:00.186303] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:06.408 passed 00:07:06.408 Test: blob_create_internal ...passed 00:07:06.408 Test: blob_create_zero_extent ...passed 00:07:06.408 Test: blob_snapshot ...passed 00:07:06.667 Test: blob_clone ...passed 00:07:06.667 Test: blob_inflate ...[2024-07-12 10:21:00.405371] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:07:06.667 passed 00:07:06.667 Test: blob_delete ...passed 00:07:06.667 Test: blob_resize_test ...[2024-07-12 10:21:00.488161] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:07:06.667 passed 00:07:06.667 Test: channel_ops ...passed 00:07:06.667 Test: blob_super ...passed 00:07:06.925 Test: blob_rw_verify_iov ...passed 00:07:06.925 Test: blob_unmap ...passed 00:07:06.925 Test: blob_iter ...passed 00:07:06.925 Test: blob_parse_md ...passed 00:07:06.925 Test: bs_load_pending_removal ...passed 00:07:06.925 Test: bs_unload ...[2024-07-12 10:21:00.823780] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:07:06.925 passed 00:07:07.183 Test: bs_usable_clusters ...passed 00:07:07.183 Test: blob_crc ...[2024-07-12 10:21:00.908963] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:07.183 [2024-07-12 10:21:00.909122] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:07.183 passed 00:07:07.183 Test: blob_flags ...passed 00:07:07.183 Test: bs_version ...passed 00:07:07.183 Test: blob_set_xattrs_test ...[2024-07-12 10:21:01.035994] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:07.183 [2024-07-12 10:21:01.036106] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:07.183 passed 00:07:07.440 Test: blob_thin_prov_alloc ...passed 00:07:07.440 Test: blob_insert_cluster_msg_test ...passed 00:07:07.440 Test: blob_thin_prov_rw ...passed 00:07:07.440 Test: blob_thin_prov_rle ...passed 00:07:07.440 Test: blob_thin_prov_rw_iov ...passed 00:07:07.698 Test: blob_snapshot_rw ...passed 00:07:07.698 Test: blob_snapshot_rw_iov ...passed 00:07:07.954 Test: blob_inflate_rw ...passed 00:07:07.954 Test: blob_snapshot_freeze_io ...passed 00:07:07.954 Test: blob_operation_split_rw ...passed 00:07:08.211 Test: blob_operation_split_rw_iov ...passed 00:07:08.211 Test: blob_simultaneous_operations ...[2024-07-12 10:21:02.087877] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:08.211 [2024-07-12 10:21:02.088015] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:08.211 [2024-07-12 10:21:02.088582] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:08.211 [2024-07-12 10:21:02.088626] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:08.211 [2024-07-12 10:21:02.091686] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:08.211 [2024-07-12 10:21:02.091740] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:08.211 [2024-07-12 10:21:02.091847] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot 
remove snapshot because it is open 00:07:08.211 [2024-07-12 10:21:02.091873] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:08.211 passed 00:07:08.468 Test: blob_persist_test ...passed 00:07:08.468 Test: blob_decouple_snapshot ...passed 00:07:08.468 Test: blob_seek_io_unit ...passed 00:07:08.468 Test: blob_nested_freezes ...passed 00:07:08.468 Suite: blob_blob_copy_noextent 00:07:08.468 Test: blob_write ...passed 00:07:08.468 Test: blob_read ...passed 00:07:08.468 Test: blob_rw_verify ...passed 00:07:08.726 Test: blob_rw_verify_iov_nomem ...passed 00:07:08.726 Test: blob_rw_iov_read_only ...passed 00:07:08.726 Test: blob_xattr ...passed 00:07:08.726 Test: blob_dirty_shutdown ...passed 00:07:08.726 Test: blob_is_degraded ...passed 00:07:08.726 Suite: blob_esnap_bs_copy_noextent 00:07:08.726 Test: blob_esnap_create ...passed 00:07:08.726 Test: blob_esnap_thread_add_remove ...passed 00:07:08.984 Test: blob_esnap_clone_snapshot ...passed 00:07:08.984 Test: blob_esnap_clone_inflate ...passed 00:07:08.984 Test: blob_esnap_clone_decouple ...passed 00:07:08.984 Test: blob_esnap_clone_reload ...passed 00:07:08.984 Test: blob_esnap_hotplug ...passed 00:07:08.984 Suite: blob_copy_extent 00:07:08.984 Test: blob_init ...[2024-07-12 10:21:02.803199] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:07:08.984 passed 00:07:08.984 Test: blob_thin_provision ...passed 00:07:08.984 Test: blob_read_only ...passed 00:07:08.984 Test: bs_load ...[2024-07-12 10:21:02.854492] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:07:08.984 passed 00:07:08.984 Test: bs_load_custom_cluster_size ...passed 00:07:08.984 Test: bs_load_after_failed_grow ...passed 00:07:08.984 Test: bs_cluster_sz ...[2024-07-12 10:21:02.882839] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:07:08.984 [2024-07-12 10:21:02.883054] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
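
The companion spdk_bs_init error seen in each bs_cluster_sz run, "Blobstore metadata cannot use more clusters than is available", is a capacity check: the pages reserved for metadata, expressed in clusters, have to fit on the backing device. A back-of-the-envelope version of that arithmetic, with made-up geometry — the real accounting in lib/blob/blobstore.c also reserves the super block and allocation bitmaps:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* Assumed example geometry; none of these numbers come from the log. */
        uint64_t dev_size     = 64ull * 1024 * 1024; /* 64 MiB device */
        uint32_t cluster_sz   = 1024 * 1024;         /* 1 MiB clusters */
        uint32_t page_size    = 4096;
        uint32_t num_md_pages = 512;                 /* pages reserved for metadata */

        uint64_t total_clusters    = dev_size / cluster_sz;  /* 64 */
        uint32_t pages_per_cluster = cluster_sz / page_size; /* 256 */
        /* Clusters consumed by the reserved metadata pages, rounded up. */
        uint64_t md_clusters = (num_md_pages + pages_per_cluster - 1) / pages_per_cluster;

        if (md_clusters > total_clusters) {
            fprintf(stderr, "metadata does not fit: decrease reserved pages or increase cluster size\n");
            return 1;
        }
        printf("%llu of %llu clusters used for metadata\n",
               (unsigned long long)md_clusters, (unsigned long long)total_clusters);
        return 0;
    }

Shrinking cluster_sz or growing num_md_pages in the sketch tips the check over, which is exactly the knob the error message tells the caller to turn.
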
00:07:08.984 [2024-07-12 10:21:02.883130] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:07:08.984 passed 00:07:08.984 Test: bs_resize_md ...passed 00:07:09.254 Test: bs_destroy ...passed 00:07:09.254 Test: bs_type ...passed 00:07:09.254 Test: bs_super_block ...passed 00:07:09.254 Test: bs_test_recover_cluster_count ...passed 00:07:09.254 Test: bs_grow_live ...passed 00:07:09.254 Test: bs_grow_live_no_space ...passed 00:07:09.254 Test: bs_test_grow ...passed 00:07:09.254 Test: blob_serialize_test ...passed 00:07:09.254 Test: super_block_crc ...passed 00:07:09.254 Test: blob_thin_prov_write_count_io ...passed 00:07:09.254 Test: bs_load_iter_test ...passed 00:07:09.254 Test: blob_relations ...[2024-07-12 10:21:03.039258] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:09.254 [2024-07-12 10:21:03.039434] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:09.254 [2024-07-12 10:21:03.040502] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:09.254 [2024-07-12 10:21:03.040592] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:09.254 passed 00:07:09.254 Test: blob_relations2 ...[2024-07-12 10:21:03.058562] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:09.254 [2024-07-12 10:21:03.058670] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:09.254 [2024-07-12 10:21:03.058764] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:09.254 [2024-07-12 10:21:03.058792] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:09.254 [2024-07-12 10:21:03.060321] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:09.254 [2024-07-12 10:21:03.060414] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:09.254 [2024-07-12 10:21:03.060948] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:09.254 [2024-07-12 10:21:03.061045] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:09.254 passed 00:07:09.254 Test: blob_relations3 ...passed 00:07:09.542 Test: blobstore_clean_power_failure ...passed 00:07:09.542 Test: blob_delete_snapshot_power_failure ...[2024-07-12 10:21:03.240947] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:09.542 [2024-07-12 10:21:03.254534] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:09.542 [2024-07-12 10:21:03.268133] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:09.542 [2024-07-12 10:21:03.268251] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:09.542 [2024-07-12 10:21:03.268301] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:09.542 [2024-07-12 10:21:03.284796] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:09.542 [2024-07-12 10:21:03.284893] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:09.542 [2024-07-12 10:21:03.284934] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:09.542 [2024-07-12 10:21:03.284958] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:09.542 [2024-07-12 10:21:03.298168] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:09.542 [2024-07-12 10:21:03.298274] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:09.542 [2024-07-12 10:21:03.298332] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:09.542 [2024-07-12 10:21:03.298357] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:09.542 [2024-07-12 10:21:03.312406] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:07:09.542 [2024-07-12 10:21:03.312559] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:09.542 [2024-07-12 10:21:03.326433] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:07:09.543 [2024-07-12 10:21:03.326575] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:09.543 [2024-07-12 10:21:03.340840] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:07:09.543 [2024-07-12 10:21:03.340962] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:09.543 passed 00:07:09.543 Test: blob_create_snapshot_power_failure ...[2024-07-12 10:21:03.380745] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:09.543 [2024-07-12 10:21:03.392494] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:09.543 [2024-07-12 10:21:03.415674] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:09.543 [2024-07-12 10:21:03.427816] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:07:09.543 passed 00:07:09.800 Test: blob_io_unit ...passed 00:07:09.800 Test: blob_io_unit_compatibility ...passed 00:07:09.800 Test: blob_ext_md_pages ...passed 00:07:09.800 Test: blob_esnap_io_4096_4096 ...passed 00:07:09.800 Test: blob_esnap_io_512_512 ...passed 00:07:09.800 Test: blob_esnap_io_4096_512 ...passed 00:07:09.800 Test: 
blob_esnap_io_512_4096 ...passed 00:07:09.800 Suite: blob_bs_copy_extent 00:07:09.800 Test: blob_open ...passed 00:07:09.800 Test: blob_create ...[2024-07-12 10:21:03.674519] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:07:09.800 passed 00:07:10.058 Test: blob_create_loop ...passed 00:07:10.058 Test: blob_create_fail ...[2024-07-12 10:21:03.788037] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:10.058 passed 00:07:10.058 Test: blob_create_internal ...passed 00:07:10.058 Test: blob_create_zero_extent ...passed 00:07:10.058 Test: blob_snapshot ...passed 00:07:10.058 Test: blob_clone ...passed 00:07:10.058 Test: blob_inflate ...[2024-07-12 10:21:03.984270] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:07:10.316 passed 00:07:10.316 Test: blob_delete ...passed 00:07:10.316 Test: blob_resize_test ...[2024-07-12 10:21:04.057944] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:07:10.316 passed 00:07:10.316 Test: channel_ops ...passed 00:07:10.316 Test: blob_super ...passed 00:07:10.316 Test: blob_rw_verify_iov ...passed 00:07:10.316 Test: blob_unmap ...passed 00:07:10.575 Test: blob_iter ...passed 00:07:10.575 Test: blob_parse_md ...passed 00:07:10.575 Test: bs_load_pending_removal ...passed 00:07:10.575 Test: bs_unload ...[2024-07-12 10:21:04.376439] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:07:10.575 passed 00:07:10.575 Test: bs_usable_clusters ...passed 00:07:10.575 Test: blob_crc ...[2024-07-12 10:21:04.455613] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:10.575 [2024-07-12 10:21:04.455800] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:10.575 passed 00:07:10.833 Test: blob_flags ...passed 00:07:10.833 Test: bs_version ...passed 00:07:10.833 Test: blob_set_xattrs_test ...[2024-07-12 10:21:04.577482] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:10.833 [2024-07-12 10:21:04.577657] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:10.833 passed 00:07:10.833 Test: blob_thin_prov_alloc ...passed 00:07:11.092 Test: blob_insert_cluster_msg_test ...passed 00:07:11.092 Test: blob_thin_prov_rw ...passed 00:07:11.092 Test: blob_thin_prov_rle ...passed 00:07:11.092 Test: blob_thin_prov_rw_iov ...passed 00:07:11.092 Test: blob_snapshot_rw ...passed 00:07:11.092 Test: blob_snapshot_rw_iov ...passed 00:07:11.350 Test: blob_inflate_rw ...passed 00:07:11.350 Test: blob_snapshot_freeze_io ...passed 00:07:11.609 Test: blob_operation_split_rw ...passed 00:07:11.868 Test: blob_operation_split_rw_iov ...passed 00:07:11.868 Test: blob_simultaneous_operations ...[2024-07-12 10:21:05.611010] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:11.868 [2024-07-12 
10:21:05.611146] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:11.868 [2024-07-12 10:21:05.611818] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:11.868 [2024-07-12 10:21:05.611861] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:11.868 [2024-07-12 10:21:05.615260] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:11.868 [2024-07-12 10:21:05.615318] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:11.868 [2024-07-12 10:21:05.615459] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:11.868 [2024-07-12 10:21:05.615512] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:11.868 passed 00:07:11.868 Test: blob_persist_test ...passed 00:07:11.868 Test: blob_decouple_snapshot ...passed 00:07:11.868 Test: blob_seek_io_unit ...passed 00:07:12.127 Test: blob_nested_freezes ...passed 00:07:12.127 Suite: blob_blob_copy_extent 00:07:12.127 Test: blob_write ...passed 00:07:12.127 Test: blob_read ...passed 00:07:12.127 Test: blob_rw_verify ...passed 00:07:12.127 Test: blob_rw_verify_iov_nomem ...passed 00:07:12.385 Test: blob_rw_iov_read_only ...passed 00:07:12.385 Test: blob_xattr ...passed 00:07:12.385 Test: blob_dirty_shutdown ...passed 00:07:12.385 Test: blob_is_degraded ...passed 00:07:12.385 Suite: blob_esnap_bs_copy_extent 00:07:12.385 Test: blob_esnap_create ...passed 00:07:12.385 Test: blob_esnap_thread_add_remove ...passed 00:07:12.643 Test: blob_esnap_clone_snapshot ...passed 00:07:12.643 Test: blob_esnap_clone_inflate ...passed 00:07:12.643 Test: blob_esnap_clone_decouple ...passed 00:07:12.643 Test: blob_esnap_clone_reload ...passed 00:07:12.643 Test: blob_esnap_hotplug ...passed 00:07:12.643 00:07:12.643 Run Summary: Type Total Ran Passed Failed Inactive 00:07:12.643 suites 16 16 n/a 0 0 00:07:12.643 tests 348 348 348 0 0 00:07:12.643 asserts 92605 92605 92605 0 n/a 00:07:12.643 00:07:12.643 Elapsed time = 14.043 seconds 00:07:12.901 10:21:06 -- unit/unittest.sh@41 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob_bdev.c/blob_bdev_ut 00:07:12.901 00:07:12.901 00:07:12.901 CUnit - A unit testing framework for C - Version 2.1-3 00:07:12.901 http://cunit.sourceforge.net/ 00:07:12.901 00:07:12.901 00:07:12.901 Suite: blob_bdev 00:07:12.901 Test: create_bs_dev ...passed 00:07:12.901 Test: create_bs_dev_ro ...[2024-07-12 10:21:06.607805] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 507:spdk_bdev_create_bs_dev: *ERROR*: bdev name 'nope': unsupported options 00:07:12.901 passed 00:07:12.901 Test: create_bs_dev_rw ...passed 00:07:12.901 Test: claim_bs_dev ...[2024-07-12 10:21:06.608274] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 340:spdk_bs_bdev_claim: *ERROR*: could not claim bs dev 00:07:12.901 passed 00:07:12.901 Test: claim_bs_dev_ro ...passed 00:07:12.901 Test: deferred_destroy_refs ...passed 00:07:12.901 Test: deferred_destroy_channels ...passed 00:07:12.901 Test: deferred_destroy_threads ...passed 00:07:12.901 00:07:12.901 Run Summary: Type Total Ran Passed Failed Inactive 00:07:12.901 suites 1 1 n/a 0 0 00:07:12.901 tests 8 8 8 0 0 00:07:12.901 
asserts 119 119 119 0 n/a 00:07:12.901 00:07:12.901 Elapsed time = 0.001 seconds 00:07:12.901 10:21:06 -- unit/unittest.sh@42 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/tree.c/tree_ut 00:07:12.901 00:07:12.901 00:07:12.901 CUnit - A unit testing framework for C - Version 2.1-3 00:07:12.901 http://cunit.sourceforge.net/ 00:07:12.901 00:07:12.901 00:07:12.901 Suite: tree 00:07:12.901 Test: blobfs_tree_op_test ...passed 00:07:12.901 00:07:12.901 Run Summary: Type Total Ran Passed Failed Inactive 00:07:12.901 suites 1 1 n/a 0 0 00:07:12.901 tests 1 1 1 0 0 00:07:12.901 asserts 27 27 27 0 n/a 00:07:12.901 00:07:12.901 Elapsed time = 0.000 seconds 00:07:12.901 10:21:06 -- unit/unittest.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut 00:07:12.901 00:07:12.901 00:07:12.901 CUnit - A unit testing framework for C - Version 2.1-3 00:07:12.901 http://cunit.sourceforge.net/ 00:07:12.901 00:07:12.901 00:07:12.901 Suite: blobfs_async_ut 00:07:12.901 Test: fs_init ...passed 00:07:12.901 Test: fs_open ...passed 00:07:12.901 Test: fs_create ...passed 00:07:12.901 Test: fs_truncate ...passed 00:07:13.160 Test: fs_rename ...[2024-07-12 10:21:06.850620] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1474:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=file1 to deleted 00:07:13.160 passed 00:07:13.160 Test: fs_rw_async ...passed 00:07:13.160 Test: fs_writev_readv_async ...passed 00:07:13.160 Test: tree_find_buffer_ut ...passed 00:07:13.160 Test: channel_ops ...passed 00:07:13.160 Test: channel_ops_sync ...passed 00:07:13.160 00:07:13.160 Run Summary: Type Total Ran Passed Failed Inactive 00:07:13.160 suites 1 1 n/a 0 0 00:07:13.160 tests 10 10 10 0 0 00:07:13.160 asserts 292 292 292 0 n/a 00:07:13.160 00:07:13.160 Elapsed time = 0.266 seconds 00:07:13.160 10:21:06 -- unit/unittest.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut 00:07:13.160 00:07:13.160 00:07:13.160 CUnit - A unit testing framework for C - Version 2.1-3 00:07:13.160 http://cunit.sourceforge.net/ 00:07:13.160 00:07:13.160 00:07:13.160 Suite: blobfs_sync_ut 00:07:13.160 Test: cache_read_after_write ...[2024-07-12 10:21:07.084189] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1474:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=testfile to deleted 00:07:13.160 passed 00:07:13.418 Test: file_length ...passed 00:07:13.418 Test: append_write_to_extend_blob ...passed 00:07:13.418 Test: partial_buffer ...passed 00:07:13.418 Test: cache_write_null_buffer ...passed 00:07:13.418 Test: fs_create_sync ...passed 00:07:13.418 Test: fs_rename_sync ...passed 00:07:13.418 Test: cache_append_no_cache ...passed 00:07:13.418 Test: fs_delete_file_without_close ...passed 00:07:13.418 00:07:13.418 Run Summary: Type Total Ran Passed Failed Inactive 00:07:13.418 suites 1 1 n/a 0 0 00:07:13.418 tests 9 9 9 0 0 00:07:13.419 asserts 345 345 345 0 n/a 00:07:13.419 00:07:13.419 Elapsed time = 0.555 seconds 00:07:13.676 10:21:07 -- unit/unittest.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut 00:07:13.676 00:07:13.676 00:07:13.676 CUnit - A unit testing framework for C - Version 2.1-3 00:07:13.676 http://cunit.sourceforge.net/ 00:07:13.676 00:07:13.676 00:07:13.676 Suite: blobfs_bdev_ut 00:07:13.676 Test: spdk_blobfs_bdev_detect_test ...[2024-07-12 10:21:07.360391] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 
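
Every binary in this run (blob_ut, blob_bdev_ut, tree_ut, blobfs_async_ut, blobfs_sync_ut, blobfs_bdev_ut, ...) prints the same CUnit banner, per-test "passed" lines, and a closing Run Summary table. A minimal sketch of how such a binary is assembled with the CUnit API — the suite and test names here are placeholders, not SPDK's:

    #include <CUnit/Basic.h>

    /* A real SPDK suite registers functions that drive the library under
     * test and CU_ASSERT on the results; this placeholder just passes. */
    static void example_test(void)
    {
        CU_ASSERT(1 + 1 == 2);
    }

    int main(void)
    {
        CU_pSuite suite;

        if (CU_initialize_registry() != CUE_SUCCESS) {
            return CU_get_error();
        }
        suite = CU_add_suite("example_suite", NULL, NULL);
        if (suite == NULL || CU_add_test(suite, "example_test", example_test) == NULL) {
            CU_cleanup_registry();
            return CU_get_error();
        }
        CU_basic_set_mode(CU_BRM_VERBOSE); /* the verbose mode behind the per-test lines above */
        CU_basic_run_tests();              /* prints the "Run Summary" table */
        CU_cleanup_registry();
        return CU_get_error();
    }
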
00:07:13.676 passed 00:07:13.676 Test: spdk_blobfs_bdev_create_test ...[2024-07-12 10:21:07.360733] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:07:13.676 passed 00:07:13.676 Test: spdk_blobfs_bdev_mount_test ...passed 00:07:13.676 00:07:13.676 Run Summary: Type Total Ran Passed Failed Inactive 00:07:13.676 suites 1 1 n/a 0 0 00:07:13.676 tests 3 3 3 0 0 00:07:13.676 asserts 9 9 9 0 n/a 00:07:13.676 00:07:13.676 Elapsed time = 0.000 seconds 00:07:13.676 ************************************ 00:07:13.676 END TEST unittest_blob_blobfs 00:07:13.676 ************************************ 00:07:13.676 00:07:13.676 real 0m15.030s 00:07:13.676 user 0m14.447s 00:07:13.676 sys 0m0.791s 00:07:13.676 10:21:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.676 10:21:07 -- common/autotest_common.sh@10 -- # set +x 00:07:13.676 10:21:07 -- unit/unittest.sh@232 -- # run_test unittest_event unittest_event 00:07:13.676 10:21:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:13.676 10:21:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:13.676 10:21:07 -- common/autotest_common.sh@10 -- # set +x 00:07:13.676 ************************************ 00:07:13.676 START TEST unittest_event 00:07:13.676 ************************************ 00:07:13.676 10:21:07 -- common/autotest_common.sh@1104 -- # unittest_event 00:07:13.676 10:21:07 -- unit/unittest.sh@50 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/app.c/app_ut 00:07:13.676 00:07:13.676 00:07:13.676 CUnit - A unit testing framework for C - Version 2.1-3 00:07:13.676 http://cunit.sourceforge.net/ 00:07:13.676 00:07:13.676 00:07:13.676 Suite: app_suite 00:07:13.676 Test: test_spdk_app_parse_args ...app_ut [options] 00:07:13.676 options: 00:07:13.676 -c, --config JSON config file (default none) 00:07:13.676 --json JSON config file (default none) 00:07:13.676 --json-ignore-init-errors 00:07:13.676 don't exit on invalid config entry 00:07:13.676 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:13.676 -g, --single-file-segments 00:07:13.676 force creating just one hugetlbfs file 00:07:13.676 -h, --help show this usage 00:07:13.676 -i, --shm-id shared memory ID (optional) 00:07:13.676 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:07:13.676 --lcores lcore to CPU mapping list. The list is in the format: 00:07:13.676 [<,lcores[@CPUs]>...] 00:07:13.677 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:13.677 Within the group, '-' is used for range separator, 00:07:13.677 ',' is used for single number separator. 00:07:13.677 '( )' can be omitted for single element group, 00:07:13.677 '@' can be omitted if cpus and lcores have the same value 00:07:13.677 -n, --mem-channels channel number of memory channels used for DPDK 00:07:13.677 -p, --main-core main (primary) core for DPDK 00:07:13.677 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:13.677 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:13.677 --disable-cpumask-locks Disable CPU core lock files. 
00:07:13.677 --silence-noticelog disable notice level logging to stderr 00:07:13.677 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:13.677 -u, --no-pci disable PCI access 00:07:13.677 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:13.677 --max-delay maximum reactor delay (in microseconds) 00:07:13.677 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:13.677 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:13.677 -R, --huge-unlink unlink huge files after initialization 00:07:13.677 -v, --version print SPDK version 00:07:13.677 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:13.677 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:13.677 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:13.677 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:07:13.677 Tracepoints vary in size and can use more than one trace entry. 00:07:13.677 --rpcs-allowed comma-separated list of permitted RPCS 00:07:13.677 --env-context Opaque context for use of the env implementation 00:07:13.677 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:13.677 --no-huge run without using hugepages 00:07:13.677 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:07:13.677 -e, --tpoint-group [:] 00:07:13.677 app_ut: invalid option -- 'z' 00:07:13.677 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:07:13.677 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:07:13.677 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:07:13.677 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:07:13.677 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:07:13.677 app_ut [options] 00:07:13.677 options: 00:07:13.677 -c, --config JSON config file (default none) 00:07:13.677 --json JSON config file (default none) 00:07:13.677 --json-ignore-init-errors 00:07:13.677 don't exit on invalid config entry 00:07:13.677 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:13.677 -g, --single-file-segments 00:07:13.677 force creating just one hugetlbfs file 00:07:13.677 -h, --help show this usage 00:07:13.677 -i, --shm-id shared memory ID (optional) 00:07:13.677 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:07:13.677 --lcores lcore to CPU mapping list. The list is in the format: 00:07:13.677 [<,lcores[@CPUs]>...] 00:07:13.677 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:13.677 Within the group, '-' is used for range separator, 00:07:13.677 ',' is used for single number separator. 00:07:13.677 '( )' can be omitted for single element group, 00:07:13.677 '@' can be omitted if cpus and lcores have the same value 00:07:13.677 -n, --mem-channels channel number of memory channels used for DPDK 00:07:13.677 -p, --main-core main (primary) core for DPDK 00:07:13.677 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:13.677 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:13.677 --disable-cpumask-locks Disable CPU core lock files. 
00:07:13.677 --silence-noticelog disable notice level logging to stderr 00:07:13.677 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:13.677 -u, --no-pci disable PCI access 00:07:13.677 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:13.677 --max-delay maximum reactor delay (in microseconds) 00:07:13.677 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:13.677 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:13.677 -R, --huge-unlink unlink huge files after initialization 00:07:13.677 -v, --version print SPDK version 00:07:13.677 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:13.677 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:13.677 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:13.677 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:07:13.677 Tracepoints vary in size and can use more than one trace entry. 00:07:13.677 --rpcs-allowed comma-separated list of permitted RPCS 00:07:13.677 --env-context Opaque context for use of the env implementation 00:07:13.677 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:13.677 --no-huge run without using hugepages 00:07:13.677 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:07:13.677 -e, --tpoint-group [:] 00:07:13.677 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:07:13.677 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:07:13.677 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:07:13.677 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:07:13.677 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:07:13.677 app_ut: unrecognized option '--test-long-opt' 00:07:13.677 [2024-07-12 10:21:07.440145] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1030:spdk_app_parse_args: *ERROR*: Duplicated option 'c' between app-specific command line parameter and generic spdk opts. 00:07:13.677 app_ut [options] 00:07:13.677 options: 00:07:13.677 -c, --config JSON config file (default none) 00:07:13.677 --json JSON config file (default none) 00:07:13.677 --json-ignore-init-errors 00:07:13.677 don't exit on invalid config entry 00:07:13.677 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:13.677 -g, --single-file-segments 00:07:13.677 force creating just one hugetlbfs file 00:07:13.677 -h, --help show this usage 00:07:13.677 -i, --shm-id shared memory ID (optional) 00:07:13.677 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:07:13.677 --lcores lcore to CPU mapping list. The list is in the format: 00:07:13.677 [<,lcores[@CPUs]>...] 00:07:13.677 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:13.677 Within the group, '-' is used for range separator, 00:07:13.677 ',' is used for single number separator. 
00:07:13.677 '( )' can be omitted for single element group, 00:07:13.677 '@' can be omitted if cpus and lcores have the same value 00:07:13.677 -n, --mem-channels channel number of memory channels used for DPDK 00:07:13.677 -p, --main-core main (primary) core for DPDK 00:07:13.677 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:13.677 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:13.677 --disable-cpumask-locks Disable CPU core lock files. 00:07:13.677 --silence-noticelog disable notice level logging to stderr 00:07:13.677 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:13.677 -u, --no-pci disable PCI access 00:07:13.677 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:13.677 --max-delay maximum reactor delay (in microseconds) 00:07:13.677 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:13.677 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:13.677 -R, --huge-unlink unlink huge files after initialization 00:07:13.677 -v, --version print SPDK version 00:07:13.677 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:13.677 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:13.677 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:13.677 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:07:13.677 Tracepoints vary in size and can use more than one trace entry. 00:07:13.677 --rpcs-allowed comma-separated list of permitted RPCS 00:07:13.677 --env-context Opaque context for use of the env implementation 00:07:13.677 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:13.677 --no-huge run without using hugepages 00:07:13.677 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:07:13.677 -e, --tpoint-group [:] 00:07:13.677 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:07:13.677 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:07:13.677 Groups and masks can be combined (e.g. thread,bdev:0x1). 
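
test_spdk_app_parse_args walks spdk_app_parse_args() through three failure paths, each of which re-prints the usage text above: an invalid short option ('z'), an unrecognized long option ('--test-long-opt'), and an application option string that collides with a generic SPDK option (the "Duplicated option 'c'" error from app.c:1030). For reference, the -e, --tpoint-group flag takes <group-name>[:<tpoint_mask>] — the angle-bracketed placeholders have been lost from the help text as captured here — e.g. -e bdev:0x1 or -e thread,bdev:0x1, matching the examples in the help. A sketch of the collision case, assuming the one-argument spdk_app_opts_init() of the SPDK revision under test (newer releases add a size parameter):

    #include "spdk/event.h"

    /* Callback for application-specific options; unused in this sketch. */
    static int app_parse(int ch, char *arg)
    {
        (void)ch;
        (void)arg;
        return 0;
    }

    static void app_usage(void)
    {
    }

    int main(int argc, char **argv)
    {
        struct spdk_app_opts opts;

        spdk_app_opts_init(&opts);
        /* "c" is already claimed by the generic -c, --config option, so this is
         * expected to fail with the "Duplicated option 'c'" error seen above. */
        if (spdk_app_parse_args(argc, argv, &opts, "c", NULL,
                                app_parse, app_usage) != SPDK_APP_PARSE_ARGS_SUCCESS) {
            return 1;
        }
        return 0;
    }
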
00:07:13.677 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:07:13.677 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:07:13.677 [2024-07-12 10:21:07.440590] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1211:spdk_app_parse_args: *ERROR*: -B and -W cannot be used at the same time 00:07:13.677 passed 00:07:13.677 00:07:13.677 Run Summary: Type Total Ran Passed Failed Inactive 00:07:13.677 suites 1 1 n/a 0 0 00:07:13.677 tests 1 1 1 0 0 00:07:13.677 asserts 8 8 8 0 n/a 00:07:13.677 00:07:13.677 Elapsed time = 0.002 seconds 00:07:13.677 [2024-07-12 10:21:07.440883] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1116:spdk_app_parse_args: *ERROR*: Invalid main core --single-file-segments 00:07:13.677 10:21:07 -- unit/unittest.sh@51 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/reactor.c/reactor_ut 00:07:13.677 00:07:13.677 00:07:13.678 CUnit - A unit testing framework for C - Version 2.1-3 00:07:13.678 http://cunit.sourceforge.net/ 00:07:13.678 00:07:13.678 00:07:13.678 Suite: app_suite 00:07:13.678 Test: test_create_reactor ...passed 00:07:13.678 Test: test_init_reactors ...passed 00:07:13.678 Test: test_event_call ...passed 00:07:13.678 Test: test_schedule_thread ...passed 00:07:13.678 Test: test_reschedule_thread ...passed 00:07:13.678 Test: test_bind_thread ...passed 00:07:13.678 Test: test_for_each_reactor ...passed 00:07:13.678 Test: test_reactor_stats ...passed 00:07:13.678 Test: test_scheduler ...passed 00:07:13.678 Test: test_governor ...passed 00:07:13.678 00:07:13.678 Run Summary: Type Total Ran Passed Failed Inactive 00:07:13.678 suites 1 1 n/a 0 0 00:07:13.678 tests 10 10 10 0 0 00:07:13.678 asserts 344 344 344 0 n/a 00:07:13.678 00:07:13.678 Elapsed time = 0.021 seconds 00:07:13.678 00:07:13.678 real 0m0.098s 00:07:13.678 user 0m0.060s 00:07:13.678 sys 0m0.039s 00:07:13.678 ************************************ 00:07:13.678 END TEST unittest_event 00:07:13.678 ************************************ 00:07:13.678 10:21:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.678 10:21:07 -- common/autotest_common.sh@10 -- # set +x 00:07:13.678 10:21:07 -- unit/unittest.sh@233 -- # uname -s 00:07:13.678 10:21:07 -- unit/unittest.sh@233 -- # '[' Linux = Linux ']' 00:07:13.678 10:21:07 -- unit/unittest.sh@234 -- # run_test unittest_ftl unittest_ftl 00:07:13.678 10:21:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:13.678 10:21:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:13.678 10:21:07 -- common/autotest_common.sh@10 -- # set +x 00:07:13.678 ************************************ 00:07:13.678 START TEST unittest_ftl 00:07:13.678 ************************************ 00:07:13.678 10:21:07 -- common/autotest_common.sh@1104 -- # unittest_ftl 00:07:13.678 10:21:07 -- unit/unittest.sh@55 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_band.c/ftl_band_ut 00:07:13.678 00:07:13.678 00:07:13.678 CUnit - A unit testing framework for C - Version 2.1-3 00:07:13.678 http://cunit.sourceforge.net/ 00:07:13.678 00:07:13.678 00:07:13.678 Suite: ftl_band_suite 00:07:13.936 Test: test_band_block_offset_from_addr_base ...passed 00:07:13.936 Test: test_band_block_offset_from_addr_offset ...passed 00:07:13.936 Test: test_band_addr_from_block_offset ...passed 00:07:13.936 Test: test_band_set_addr ...passed 00:07:13.936 Test: test_invalidate_addr ...passed 00:07:13.936 Test: test_next_xfer_addr ...passed 00:07:13.936 00:07:13.936 Run Summary: 
Type Total Ran Passed Failed Inactive 00:07:13.936 suites 1 1 n/a 0 0 00:07:13.936 tests 6 6 6 0 0 00:07:13.936 asserts 30356 30356 30356 0 n/a 00:07:13.936 00:07:13.936 Elapsed time = 0.199 seconds 00:07:13.936 10:21:07 -- unit/unittest.sh@56 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut 00:07:13.936 00:07:13.936 00:07:13.936 CUnit - A unit testing framework for C - Version 2.1-3 00:07:13.936 http://cunit.sourceforge.net/ 00:07:13.936 00:07:13.936 00:07:13.936 Suite: ftl_bitmap 00:07:13.936 Test: test_ftl_bitmap_create ...[2024-07-12 10:21:07.859733] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 52:ftl_bitmap_create: *ERROR*: Buffer for bitmap must be aligned to 8 bytes 00:07:13.936 passed 00:07:13.936 Test: test_ftl_bitmap_get ...passed 00:07:13.936 Test: test_ftl_bitmap_set ...[2024-07-12 10:21:07.860088] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 58:ftl_bitmap_create: *ERROR*: Size of buffer for bitmap must be divisible by 8 bytes 00:07:13.936 passed 00:07:13.936 Test: test_ftl_bitmap_clear ...passed 00:07:13.936 Test: test_ftl_bitmap_find_first_set ...passed 00:07:13.936 Test: test_ftl_bitmap_find_first_clear ...passed 00:07:13.936 Test: test_ftl_bitmap_count_set ...passed 00:07:13.936 00:07:13.936 Run Summary: Type Total Ran Passed Failed Inactive 00:07:13.936 suites 1 1 n/a 0 0 00:07:13.936 tests 7 7 7 0 0 00:07:13.936 asserts 137 137 137 0 n/a 00:07:13.936 00:07:13.936 Elapsed time = 0.001 seconds 00:07:14.194 10:21:07 -- unit/unittest.sh@57 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_io.c/ftl_io_ut 00:07:14.194 00:07:14.194 00:07:14.194 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.194 http://cunit.sourceforge.net/ 00:07:14.194 00:07:14.194 00:07:14.194 Suite: ftl_io_suite 00:07:14.194 Test: test_completion ...passed 00:07:14.194 Test: test_multiple_ios ...passed 00:07:14.194 00:07:14.194 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.194 suites 1 1 n/a 0 0 00:07:14.194 tests 2 2 2 0 0 00:07:14.194 asserts 47 47 47 0 n/a 00:07:14.194 00:07:14.194 Elapsed time = 0.004 seconds 00:07:14.194 10:21:07 -- unit/unittest.sh@58 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut 00:07:14.194 00:07:14.194 00:07:14.194 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.194 http://cunit.sourceforge.net/ 00:07:14.194 00:07:14.194 00:07:14.194 Suite: ftl_mngt 00:07:14.194 Test: test_next_step ...passed 00:07:14.194 Test: test_continue_step ...passed 00:07:14.194 Test: test_get_func_and_step_cntx_alloc ...passed 00:07:14.194 Test: test_fail_step ...passed 00:07:14.194 Test: test_mngt_call_and_call_rollback ...passed 00:07:14.194 Test: test_nested_process_failure ...passed 00:07:14.194 00:07:14.194 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.194 suites 1 1 n/a 0 0 00:07:14.194 tests 6 6 6 0 0 00:07:14.194 asserts 176 176 176 0 n/a 00:07:14.194 00:07:14.194 Elapsed time = 0.001 seconds 00:07:14.194 10:21:07 -- unit/unittest.sh@59 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut 00:07:14.194 00:07:14.194 00:07:14.194 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.194 http://cunit.sourceforge.net/ 00:07:14.194 00:07:14.194 00:07:14.194 Suite: ftl_mempool 00:07:14.194 Test: test_ftl_mempool_create ...passed 00:07:14.194 Test: test_ftl_mempool_get_put ...passed 00:07:14.194 00:07:14.194 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.194 suites 1 1 n/a 0 0 00:07:14.194 tests 2 2 2 0 0 
00:07:14.194 asserts 36 36 36 0 n/a 00:07:14.194 00:07:14.194 Elapsed time = 0.000 seconds 00:07:14.194 10:21:07 -- unit/unittest.sh@60 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut 00:07:14.194 00:07:14.194 00:07:14.194 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.194 http://cunit.sourceforge.net/ 00:07:14.194 00:07:14.194 00:07:14.194 Suite: ftl_addr64_suite 00:07:14.194 Test: test_addr_cached ...passed 00:07:14.194 00:07:14.194 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.194 suites 1 1 n/a 0 0 00:07:14.194 tests 1 1 1 0 0 00:07:14.194 asserts 1536 1536 1536 0 n/a 00:07:14.194 00:07:14.194 Elapsed time = 0.000 seconds 00:07:14.194 10:21:07 -- unit/unittest.sh@61 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_sb/ftl_sb_ut 00:07:14.194 00:07:14.194 00:07:14.194 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.194 http://cunit.sourceforge.net/ 00:07:14.194 00:07:14.194 00:07:14.194 Suite: ftl_sb 00:07:14.194 Test: test_sb_crc_v2 ...passed 00:07:14.194 Test: test_sb_crc_v3 ...passed 00:07:14.194 Test: test_sb_v3_md_layout ...[2024-07-12 10:21:07.996817] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 143:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Missing regions 00:07:14.194 [2024-07-12 10:21:07.997190] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 131:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:07:14.194 [2024-07-12 10:21:07.997235] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:07:14.195 [2024-07-12 10:21:07.997269] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:07:14.195 [2024-07-12 10:21:07.997298] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:07:14.195 [2024-07-12 10:21:07.997419] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 93:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Unsupported MD region type found 00:07:14.195 [2024-07-12 10:21:07.997451] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:07:14.195 [2024-07-12 10:21:07.997506] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:07:14.195 passed 00:07:14.195 Test: test_sb_v5_md_layout ...[2024-07-12 10:21:07.997584] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:07:14.195 [2024-07-12 10:21:07.997624] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:07:14.195 [2024-07-12 10:21:07.997650] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:07:14.195 passed 00:07:14.195 00:07:14.195 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.195 suites 1 1 n/a 0 0 00:07:14.195 tests 4 4 4 0 0 00:07:14.195 asserts 148 148 148 0 n/a 00:07:14.195 00:07:14.195 Elapsed time = 0.002 seconds 00:07:14.195 10:21:08 -- unit/unittest.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut 00:07:14.195 00:07:14.195 00:07:14.195 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.195 http://cunit.sourceforge.net/ 00:07:14.195 00:07:14.195 00:07:14.195 Suite: ftl_layout_upgrade 00:07:14.195 Test: test_l2p_upgrade ...passed 00:07:14.195 00:07:14.195 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.195 suites 1 1 n/a 0 0 00:07:14.195 tests 1 1 1 0 0 00:07:14.195 asserts 140 140 140 0 n/a 00:07:14.195 00:07:14.195 Elapsed time = 0.001 seconds 00:07:14.195 00:07:14.195 real 0m0.468s 00:07:14.195 user 0m0.242s 00:07:14.195 sys 0m0.230s 00:07:14.195 10:21:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.195 ************************************ 00:07:14.195 END TEST unittest_ftl 00:07:14.195 ************************************ 00:07:14.195 10:21:08 -- common/autotest_common.sh@10 -- # set +x 00:07:14.195 10:21:08 -- unit/unittest.sh@237 -- # run_test unittest_accel /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:07:14.195 10:21:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:14.195 10:21:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:14.195 10:21:08 -- common/autotest_common.sh@10 -- # set +x 00:07:14.195 ************************************ 00:07:14.195 START TEST unittest_accel 00:07:14.195 ************************************ 00:07:14.195 10:21:08 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:07:14.195 00:07:14.195 00:07:14.195 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.195 http://cunit.sourceforge.net/ 00:07:14.195 00:07:14.195 00:07:14.195 Suite: accel_sequence 00:07:14.195 Test: test_sequence_fill_copy ...passed 00:07:14.195 Test: test_sequence_abort ...passed 00:07:14.195 Test: test_sequence_append_error ...passed 00:07:14.195 Test: test_sequence_completion_error ...[2024-07-12 10:21:08.115564] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1926:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7f7cd25287c0 00:07:14.195 [2024-07-12 10:21:08.115939] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1926:accel_sequence_task_cb: *ERROR*: Failed to execute decompress operation, sequence: 0x7f7cd25287c0 00:07:14.195 [2024-07-12 10:21:08.115991] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1836:accel_process_sequence: *ERROR*: Failed to submit fill operation, sequence: 0x7f7cd25287c0 00:07:14.195 [2024-07-12 10:21:08.116042] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1836:accel_process_sequence: *ERROR*: Failed to submit decompress operation, sequence: 0x7f7cd25287c0 00:07:14.195 passed 00:07:14.195 Test: test_sequence_decompress ...passed 00:07:14.195 Test: test_sequence_reverse ...passed 00:07:14.195 Test: test_sequence_copy_elision ...passed 00:07:14.453 Test: test_sequence_accel_buffers ...passed 00:07:14.453 Test: test_sequence_memory_domain ...[2024-07-12 10:21:08.128188] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1728:accel_task_pull_data: *ERROR*: Failed to pull data from memory domain: UT_DMA, rc: -7 00:07:14.453 passed 00:07:14.453 Test: test_sequence_module_memory_domain ...[2024-07-12 10:21:08.128399] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1767:accel_task_push_data: *ERROR*: Failed to push data to memory domain: UT_DMA, rc: -98 00:07:14.453 passed 00:07:14.453 Test: test_sequence_crypto ...passed 00:07:14.453 Test: test_sequence_driver ...[2024-07-12 10:21:08.135493] 
/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1875:accel_process_sequence: *ERROR*: Failed to execute sequence: 0x7f7cd19007c0 using driver: ut 00:07:14.453 [2024-07-12 10:21:08.135611] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1939:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7f7cd19007c0 through driver: ut 00:07:14.453 passed 00:07:14.453 Test: test_sequence_same_iovs ...passed 00:07:14.453 Test: test_sequence_crc32 ...passed 00:07:14.453 Suite: accel 00:07:14.453 Test: test_spdk_accel_task_complete ...passed 00:07:14.453 Test: test_get_task ...passed 00:07:14.453 Test: test_spdk_accel_submit_copy ...passed 00:07:14.453 Test: test_spdk_accel_submit_dualcast ...passed 00:07:14.453 Test: test_spdk_accel_submit_compare ...passed 00:07:14.453 Test: test_spdk_accel_submit_fill ...passed 00:07:14.453 Test: test_spdk_accel_submit_crc32c ...passed 00:07:14.453 Test: test_spdk_accel_submit_crc32cv ...passed 00:07:14.453 Test: test_spdk_accel_submit_copy_crc32c ...passed 00:07:14.453 Test: test_spdk_accel_submit_xor ...passed 00:07:14.453 Test: test_spdk_accel_module_find_by_name ...passed 00:07:14.453 Test: test_spdk_accel_module_register ...[2024-07-12 10:21:08.140824] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 432:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:07:14.453 [2024-07-12 10:21:08.140880] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 432:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:07:14.453 passed 00:07:14.453 00:07:14.453 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.453 suites 2 2 n/a 0 0 00:07:14.453 tests 26 26 26 0 0 00:07:14.453 asserts 831 831 831 0 n/a 00:07:14.453 00:07:14.453 Elapsed time = 0.037 seconds 00:07:14.453 00:07:14.453 real 0m0.077s 00:07:14.453 user 0m0.049s 00:07:14.453 sys 0m0.028s 00:07:14.453 ************************************ 00:07:14.453 END TEST unittest_accel 00:07:14.453 ************************************ 00:07:14.453 10:21:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.453 10:21:08 -- common/autotest_common.sh@10 -- # set +x 00:07:14.453 10:21:08 -- unit/unittest.sh@238 -- # run_test unittest_ioat /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:07:14.453 10:21:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:14.453 10:21:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:14.453 10:21:08 -- common/autotest_common.sh@10 -- # set +x 00:07:14.453 ************************************ 00:07:14.453 START TEST unittest_ioat 00:07:14.453 ************************************ 00:07:14.453 10:21:08 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:07:14.453 00:07:14.453 00:07:14.453 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.453 http://cunit.sourceforge.net/ 00:07:14.453 00:07:14.453 00:07:14.453 Suite: ioat 00:07:14.453 Test: ioat_state_check ...passed 00:07:14.453 00:07:14.453 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.453 suites 1 1 n/a 0 0 00:07:14.453 tests 1 1 1 0 0 00:07:14.453 asserts 32 32 32 0 n/a 00:07:14.453 00:07:14.453 Elapsed time = 0.000 seconds 00:07:14.453 00:07:14.453 real 0m0.028s 00:07:14.453 user 0m0.016s 00:07:14.453 sys 0m0.011s 00:07:14.453 ************************************ 00:07:14.453 END TEST unittest_ioat 00:07:14.453 ************************************ 00:07:14.453 10:21:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 
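Note: every *_ut binary in this run is a CUnit program, and the recurring "Suite: ... Test: ... Run Summary" blocks are CUnit's verbose reporter. A minimal, self-contained sketch of such a test binary (generic CUnit usage, not SPDK's actual harness) looks like:

    #include <CUnit/Basic.h>

    /* One test function per "Test:" line in the log output. */
    static void test_example(void)
    {
        CU_ASSERT_EQUAL(1 + 1, 2);  /* each assertion feeds the "asserts" row */
    }

    int main(void)
    {
        if (CU_initialize_registry() != CUE_SUCCESS) {
            return CU_get_error();
        }
        /* One suite per "Suite:" line; NULL init/cleanup hooks. */
        CU_pSuite suite = CU_add_suite("example_suite", NULL, NULL);
        if (suite == NULL || CU_add_test(suite, "test_example", test_example) == NULL) {
            CU_cleanup_registry();
            return CU_get_error();
        }
        CU_basic_set_mode(CU_BRM_VERBOSE);  /* prints the per-test lines and the Run Summary table */
        CU_basic_run_tests();
        unsigned int failures = CU_get_number_of_failures();
        CU_cleanup_registry();
        return failures == 0 ? 0 : 1;
    }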
00:07:14.453 10:21:08 -- common/autotest_common.sh@10 -- # set +x 00:07:14.453 10:21:08 -- unit/unittest.sh@239 -- # grep -q '#define SPDK_CONFIG_IDXD 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:14.453 10:21:08 -- unit/unittest.sh@240 -- # run_test unittest_idxd_user /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:07:14.453 10:21:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:14.453 10:21:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:14.453 10:21:08 -- common/autotest_common.sh@10 -- # set +x 00:07:14.453 ************************************ 00:07:14.453 START TEST unittest_idxd_user 00:07:14.453 ************************************ 00:07:14.453 10:21:08 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:07:14.453 00:07:14.453 00:07:14.453 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.453 http://cunit.sourceforge.net/ 00:07:14.453 00:07:14.453 00:07:14.453 Suite: idxd_user 00:07:14.453 Test: test_idxd_wait_cmd ...passed 00:07:14.453 Test: test_idxd_reset_dev ...passed 00:07:14.453 Test: test_idxd_group_config ...passed 00:07:14.453 Test: test_idxd_wq_config ...[2024-07-12 10:21:08.309929] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:07:14.454 [2024-07-12 10:21:08.310201] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 46:idxd_wait_cmd: *ERROR*: Command timeout, waited 1 00:07:14.454 [2024-07-12 10:21:08.310336] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:07:14.454 [2024-07-12 10:21:08.310378] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 132:idxd_reset_dev: *ERROR*: Error resetting device 4294967274 00:07:14.454 passed 00:07:14.454 00:07:14.454 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.454 suites 1 1 n/a 0 0 00:07:14.454 tests 4 4 4 0 0 00:07:14.454 asserts 20 20 20 0 n/a 00:07:14.454 00:07:14.454 Elapsed time = 0.001 seconds 00:07:14.454 00:07:14.454 real 0m0.030s 00:07:14.454 user 0m0.016s 00:07:14.454 sys 0m0.014s 00:07:14.454 ************************************ 00:07:14.454 END TEST unittest_idxd_user 00:07:14.454 ************************************ 00:07:14.454 10:21:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.454 10:21:08 -- common/autotest_common.sh@10 -- # set +x 00:07:14.454 10:21:08 -- unit/unittest.sh@242 -- # run_test unittest_iscsi unittest_iscsi 00:07:14.454 10:21:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:14.454 10:21:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:14.454 10:21:08 -- common/autotest_common.sh@10 -- # set +x 00:07:14.454 ************************************ 00:07:14.454 START TEST unittest_iscsi 00:07:14.454 ************************************ 00:07:14.454 10:21:08 -- common/autotest_common.sh@1104 -- # unittest_iscsi 00:07:14.454 10:21:08 -- unit/unittest.sh@66 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/conn.c/conn_ut 00:07:14.712 00:07:14.712 00:07:14.712 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.712 http://cunit.sourceforge.net/ 00:07:14.712 00:07:14.712 00:07:14.712 Suite: conn_suite 00:07:14.712 Test: read_task_split_in_order_case ...passed 00:07:14.712 Test: read_task_split_reverse_order_case ...passed 00:07:14.712 Test: propagate_scsi_error_status_for_split_read_tasks ...passed 00:07:14.712 Test: process_non_read_task_completion_test 
...passed 00:07:14.712 Test: free_tasks_on_connection ...passed 00:07:14.712 Test: free_tasks_with_queued_datain ...passed 00:07:14.712 Test: abort_queued_datain_task_test ...passed 00:07:14.712 Test: abort_queued_datain_tasks_test ...passed 00:07:14.712 00:07:14.712 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.712 suites 1 1 n/a 0 0 00:07:14.712 tests 8 8 8 0 0 00:07:14.712 asserts 230 230 230 0 n/a 00:07:14.712 00:07:14.712 Elapsed time = 0.000 seconds 00:07:14.713 10:21:08 -- unit/unittest.sh@67 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/param.c/param_ut 00:07:14.713 00:07:14.713 00:07:14.713 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.713 http://cunit.sourceforge.net/ 00:07:14.713 00:07:14.713 00:07:14.713 Suite: iscsi_suite 00:07:14.713 Test: param_negotiation_test ...passed 00:07:14.713 Test: list_negotiation_test ...passed 00:07:14.713 Test: parse_valid_test ...passed 00:07:14.713 Test: parse_invalid_test ...[2024-07-12 10:21:08.426345] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 202:iscsi_parse_param: *ERROR*: '=' not found 00:07:14.713 [2024-07-12 10:21:08.426647] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 202:iscsi_parse_param: *ERROR*: '=' not found 00:07:14.713 [2024-07-12 10:21:08.426696] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 208:iscsi_parse_param: *ERROR*: Empty key 00:07:14.713 [2024-07-12 10:21:08.426756] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 248:iscsi_parse_param: *ERROR*: Overflow Val 8193 00:07:14.713 [2024-07-12 10:21:08.426879] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 248:iscsi_parse_param: *ERROR*: Overflow Val 256 00:07:14.713 [2024-07-12 10:21:08.426932] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 215:iscsi_parse_param: *ERROR*: Key name length is bigger than 63 00:07:14.713 [2024-07-12 10:21:08.427056] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 229:iscsi_parse_param: *ERROR*: Duplicated Key B 00:07:14.713 passed 00:07:14.713 00:07:14.713 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.713 suites 1 1 n/a 0 0 00:07:14.713 tests 4 4 4 0 0 00:07:14.713 asserts 161 161 161 0 n/a 00:07:14.713 00:07:14.713 Elapsed time = 0.005 seconds 00:07:14.713 10:21:08 -- unit/unittest.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/tgt_node.c/tgt_node_ut 00:07:14.713 00:07:14.713 00:07:14.713 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.713 http://cunit.sourceforge.net/ 00:07:14.713 00:07:14.713 00:07:14.713 Suite: iscsi_target_node_suite 00:07:14.713 Test: add_lun_test_cases ...passed 00:07:14.713 Test: allow_any_allowed ...passed 00:07:14.713 Test: allow_ipv6_allowed ...passed 00:07:14.713 Test: allow_ipv6_denied ...[2024-07-12 10:21:08.461828] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1248:iscsi_tgt_node_add_lun: *ERROR*: Target has active connections (count=1) 00:07:14.713 [2024-07-12 10:21:08.462204] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1254:iscsi_tgt_node_add_lun: *ERROR*: Specified LUN ID (-2) is negative 00:07:14.713 [2024-07-12 10:21:08.462295] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1260:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:07:14.713 [2024-07-12 10:21:08.462343] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1260:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:07:14.713 [2024-07-12 10:21:08.462366] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1266:iscsi_tgt_node_add_lun: *ERROR*: spdk_scsi_dev_add_lun failed 00:07:14.713 passed 00:07:14.713 Test: allow_ipv6_invalid 
...passed 00:07:14.713 Test: allow_ipv4_allowed ...passed 00:07:14.713 Test: allow_ipv4_denied ...passed 00:07:14.713 Test: allow_ipv4_invalid ...passed 00:07:14.713 Test: node_access_allowed ...passed 00:07:14.713 Test: node_access_denied_by_empty_netmask ...passed 00:07:14.713 Test: node_access_multi_initiator_groups_cases ...passed 00:07:14.713 Test: allow_iscsi_name_multi_maps_case ...passed 00:07:14.713 Test: chap_param_test_cases ...passed 00:07:14.713 00:07:14.713 [2024-07-12 10:21:08.462777] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=0) 00:07:14.713 [2024-07-12 10:21:08.462808] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=0,r=0,m=1) 00:07:14.713 [2024-07-12 10:21:08.462853] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=0,m=1) 00:07:14.713 [2024-07-12 10:21:08.462874] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=1) 00:07:14.713 [2024-07-12 10:21:08.462901] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1026:iscsi_check_chap_params: *ERROR*: Invalid auth group ID (-1) 00:07:14.713 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.713 suites 1 1 n/a 0 0 00:07:14.713 tests 13 13 13 0 0 00:07:14.713 asserts 50 50 50 0 n/a 00:07:14.713 00:07:14.713 Elapsed time = 0.001 seconds 00:07:14.713 10:21:08 -- unit/unittest.sh@69 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/iscsi.c/iscsi_ut 00:07:14.713 00:07:14.713 00:07:14.713 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.713 http://cunit.sourceforge.net/ 00:07:14.713 00:07:14.713 00:07:14.713 Suite: iscsi_suite 00:07:14.713 Test: op_login_check_target_test ...passed 00:07:14.713 Test: op_login_session_normal_test ...[2024-07-12 10:21:08.495801] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1434:iscsi_op_login_check_target: *ERROR*: access denied 00:07:14.713 [2024-07-12 10:21:08.496158] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:07:14.713 [2024-07-12 10:21:08.496202] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:07:14.713 [2024-07-12 10:21:08.496232] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:07:14.713 [2024-07-12 10:21:08.496280] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 695:append_iscsi_sess: *ERROR*: spdk_get_iscsi_sess_by_tsih failed 00:07:14.713 passed 00:07:14.713 Test: maxburstlength_test ...[2024-07-12 10:21:08.496374] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:07:14.713 [2024-07-12 10:21:08.496470] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 702:append_iscsi_sess: *ERROR*: no MCS session for init port name=iqn.2017-11.spdk.io:i0001, tsih=256, cid=0 00:07:14.713 [2024-07-12 10:21:08.496518] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:07:14.713 [2024-07-12 10:21:08.496732] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 
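Note: the param_ut failures logged above all come from validating iSCSI text parameters of the form Key=Value: a missing '=', an empty key, a key longer than 63 bytes, an oversized value, and a duplicated key are each rejected. A standalone sketch of those checks (constants inferred from the log messages; illustrative, not SPDK's implementation):

    #include <stdbool.h>
    #include <stddef.h>
    #include <string.h>

    #define MAX_KEY_LEN 63    /* "Key name length is bigger than 63" */
    #define MAX_VAL_LEN 8192  /* assumed bound; the log shows "Overflow Val 8193" failing */

    /* Returns true only for a well-formed "Key=Value" parameter. */
    static bool param_is_wellformed(const char *param)
    {
        const char *eq = strchr(param, '=');
        if (eq == NULL) {
            return false;                        /* "'=' not found" */
        }
        size_t key_len = (size_t)(eq - param);
        if (key_len == 0 || key_len > MAX_KEY_LEN) {
            return false;                        /* "Empty key" / oversized key */
        }
        if (strlen(eq + 1) > MAX_VAL_LEN) {
            return false;                        /* "Overflow Val" */
        }
        return true;
    }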
00:07:14.713 [2024-07-12 10:21:08.496775] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4548:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=5) failed on NULL(NULL) 00:07:14.713 passed 00:07:14.713 Test: underflow_for_read_transfer_test ...passed 00:07:14.713 Test: underflow_for_zero_read_transfer_test ...passed 00:07:14.713 Test: underflow_for_request_sense_test ...passed 00:07:14.713 Test: underflow_for_check_condition_test ...passed 00:07:14.713 Test: add_transfer_task_test ...passed 00:07:14.713 Test: get_transfer_task_test ...passed 00:07:14.713 Test: del_transfer_task_test ...passed 00:07:14.713 Test: clear_all_transfer_tasks_test ...passed 00:07:14.713 Test: build_iovs_test ...passed 00:07:14.713 Test: build_iovs_with_md_test ...passed 00:07:14.713 Test: pdu_hdr_op_login_test ...[2024-07-12 10:21:08.498186] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1251:iscsi_op_login_rsp_init: *ERROR*: transit error 00:07:14.713 [2024-07-12 10:21:08.498305] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1258:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 1/max 0, expecting 0 00:07:14.713 passed 00:07:14.713 Test: pdu_hdr_op_text_test ...[2024-07-12 10:21:08.498380] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1272:iscsi_op_login_rsp_init: *ERROR*: Received reserved NSG code: 2 00:07:14.713 [2024-07-12 10:21:08.498469] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2240:iscsi_pdu_hdr_op_text: *ERROR*: data segment len(=69) > immediate data len(=68) 00:07:14.713 [2024-07-12 10:21:08.498549] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2272:iscsi_pdu_hdr_op_text: *ERROR*: final and continue 00:07:14.713 [2024-07-12 10:21:08.498582] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2285:iscsi_pdu_hdr_op_text: *ERROR*: The correct itt is 5679, and the current itt is 5678... 00:07:14.713 passed 00:07:14.713 Test: pdu_hdr_op_logout_test ...passed 00:07:14.713 Test: pdu_hdr_op_scsi_test ...[2024-07-12 10:21:08.498653] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2515:iscsi_pdu_hdr_op_logout: *ERROR*: Target can accept logout only with reason "close the session" on discovery session. 1 is not acceptable reason. 
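Note: the maxburstlength and pdu_hdr_op_* failures around here are length-accounting guards on iSCSI PDUs: a PDU's data segment may not exceed the negotiated immediate-data length, and a Data-Out PDU may not carry more than the preceding R2T requested. A minimal sketch of those two guards (illustrative only, not SPDK's code):

    #include <stdbool.h>
    #include <stdint.h>

    /* "data segment len(=69) > immediate data len(=68)" */
    static bool data_segment_fits(uint32_t data_segment_len, uint32_t immediate_data_len)
    {
        return data_segment_len <= immediate_data_len;
    }

    /* "the dataout pdu data length is larger than the value sent by R2T PDU" */
    static bool dataout_fits_r2t(uint32_t dataout_len, uint32_t r2t_desired_len)
    {
        return dataout_len <= r2t_desired_len;
    }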
00:07:14.713 [2024-07-12 10:21:08.498782] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:07:14.713 [2024-07-12 10:21:08.498810] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:07:14.713 [2024-07-12 10:21:08.498850] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3364:iscsi_pdu_hdr_op_scsi: *ERROR*: Bidirectional CDB is not supported 00:07:14.713 [2024-07-12 10:21:08.498936] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3397:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=69) > immediate data len(=68) 00:07:14.713 [2024-07-12 10:21:08.499017] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3404:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=68) > task transfer len(=67) 00:07:14.713 [2024-07-12 10:21:08.499188] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3428:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0 00:07:14.713 passed 00:07:14.713 Test: pdu_hdr_op_task_mgmt_test ...passed[2024-07-12 10:21:08.499284] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3605:iscsi_pdu_hdr_op_task: *ERROR*: ISCSI_OP_TASK not allowed in discovery and invalid session 00:07:14.713 [2024-07-12 10:21:08.499384] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3694:iscsi_pdu_hdr_op_task: *ERROR*: unsupported function 0 00:07:14.713 00:07:14.713 Test: pdu_hdr_op_nopout_test ...passed 00:07:14.713 Test: pdu_hdr_op_data_test ...[2024-07-12 10:21:08.499570] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3713:iscsi_pdu_hdr_op_nopout: *ERROR*: ISCSI_OP_NOPOUT not allowed in discovery session 00:07:14.713 [2024-07-12 10:21:08.499669] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:07:14.713 [2024-07-12 10:21:08.499696] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:07:14.713 [2024-07-12 10:21:08.499734] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3743:iscsi_pdu_hdr_op_nopout: *ERROR*: got NOPOUT ITT=0xffffffff, I=0 00:07:14.713 [2024-07-12 10:21:08.499770] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4186:iscsi_pdu_hdr_op_data: *ERROR*: ISCSI_OP_SCSI_DATAOUT not allowed in discovery session 00:07:14.713 [2024-07-12 10:21:08.499827] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4203:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:07:14.713 [2024-07-12 10:21:08.499887] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:07:14.713 [2024-07-12 10:21:08.499930] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4216:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 0, and the dataout task tag is 1 00:07:14.713 [2024-07-12 10:21:08.499972] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4222:iscsi_pdu_hdr_op_data: *ERROR*: DataSN(1) exp=0 error 00:07:14.713 [2024-07-12 10:21:08.500047] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4233:iscsi_pdu_hdr_op_data: *ERROR*: offset(4096) error 00:07:14.713 passed 00:07:14.713 Test: empty_text_with_cbit_test ...passed 00:07:14.713 Test: pdu_payload_read_test ...[2024-07-12 10:21:08.500078] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4243:iscsi_pdu_hdr_op_data: *ERROR*: R2T burst(65536) > MaxBurstLength(65535) 00:07:14.713 [2024-07-12 10:21:08.502159] 
/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4631:iscsi_pdu_payload_read: *ERROR*: Data(65537) > MaxSegment(65536) 00:07:14.713 passed 00:07:14.713 Test: data_out_pdu_sequence_test ...passed 00:07:14.713 Test: immediate_data_and_data_out_pdu_sequence_test ...passed 00:07:14.713 00:07:14.713 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.713 suites 1 1 n/a 0 0 00:07:14.713 tests 24 24 24 0 0 00:07:14.713 asserts 150253 150253 150253 0 n/a 00:07:14.713 00:07:14.714 Elapsed time = 0.016 seconds 00:07:14.714 10:21:08 -- unit/unittest.sh@70 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/init_grp.c/init_grp_ut 00:07:14.714 00:07:14.714 00:07:14.714 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.714 http://cunit.sourceforge.net/ 00:07:14.714 00:07:14.714 00:07:14.714 Suite: init_grp_suite 00:07:14.714 Test: create_initiator_group_success_case ...passed 00:07:14.714 Test: find_initiator_group_success_case ...passed 00:07:14.714 Test: register_initiator_group_twice_case ...passed 00:07:14.714 Test: add_initiator_name_success_case ...passed 00:07:14.714 Test: add_initiator_name_fail_case ...passed 00:07:14.714 Test: delete_all_initiator_names_success_case ...passed 00:07:14.714 Test: add_netmask_success_case ...passed 00:07:14.714 Test: add_netmask_fail_case ...passed 00:07:14.714 Test: delete_all_netmasks_success_case ...passed 00:07:14.714 Test: initiator_name_overwrite_all_to_any_case ...passed 00:07:14.714 Test: netmask_overwrite_all_to_any_case ...passed 00:07:14.714 Test: add_delete_initiator_names_case ...passed 00:07:14.714 Test: add_duplicated_initiator_names_case ...passed 00:07:14.714 Test: delete_nonexisting_initiator_names_case ...passed 00:07:14.714 Test: add_delete_netmasks_case ...passed 00:07:14.714 Test: add_duplicated_netmasks_case ...passed 00:07:14.714 Test: delete_nonexisting_netmasks_case ...passed 00:07:14.714 00:07:14.714 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.714 suites 1 1 n/a 0 0 00:07:14.714 tests 17 17 17 0 0 00:07:14.714 asserts 108 108 108 0 n/a 00:07:14.714 00:07:14.714 Elapsed time = 0.001 seconds 00:07:14.714 [2024-07-12 10:21:08.543297] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 54:iscsi_init_grp_add_initiator: *ERROR*: > MAX_INITIATOR(=256) is not allowed 00:07:14.714 [2024-07-12 10:21:08.544236] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 188:iscsi_init_grp_add_netmask: *ERROR*: > MAX_NETMASK(=256) is not allowed 00:07:14.714 10:21:08 -- unit/unittest.sh@71 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/portal_grp.c/portal_grp_ut 00:07:14.714 00:07:14.714 00:07:14.714 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.714 http://cunit.sourceforge.net/ 00:07:14.714 00:07:14.714 00:07:14.714 Suite: portal_grp_suite 00:07:14.714 Test: portal_create_ipv4_normal_case ...passed 00:07:14.714 Test: portal_create_ipv6_normal_case ...passed 00:07:14.714 Test: portal_create_ipv4_wildcard_case ...passed 00:07:14.714 Test: portal_create_ipv6_wildcard_case ...passed 00:07:14.714 Test: portal_create_twice_case ...passed 00:07:14.714 Test: portal_grp_register_unregister_case ...passed 00:07:14.714 Test: portal_grp_register_twice_case ...passed 00:07:14.714 Test: portal_grp_add_delete_case ...[2024-07-12 10:21:08.573274] /home/vagrant/spdk_repo/spdk/lib/iscsi/portal_grp.c: 113:iscsi_portal_create: *ERROR*: portal (192.168.2.0, 3260) already exists 00:07:14.714 passed 00:07:14.714 Test: portal_grp_add_delete_twice_case ...passed 00:07:14.714 00:07:14.714 Run Summary: Type Total Ran 
Passed Failed Inactive 00:07:14.714 suites 1 1 n/a 0 0 00:07:14.714 tests 9 9 9 0 0 00:07:14.714 asserts 44 44 44 0 n/a 00:07:14.714 00:07:14.714 Elapsed time = 0.003 seconds 00:07:14.714 ************************************ 00:07:14.714 END TEST unittest_iscsi 00:07:14.714 ************************************ 00:07:14.714 00:07:14.714 real 0m0.219s 00:07:14.714 user 0m0.125s 00:07:14.714 sys 0m0.097s 00:07:14.714 10:21:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.714 10:21:08 -- common/autotest_common.sh@10 -- # set +x 00:07:14.714 10:21:08 -- unit/unittest.sh@243 -- # run_test unittest_json unittest_json 00:07:14.714 10:21:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:14.714 10:21:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:14.714 10:21:08 -- common/autotest_common.sh@10 -- # set +x 00:07:14.972 ************************************ 00:07:14.972 START TEST unittest_json 00:07:14.972 ************************************ 00:07:14.972 10:21:08 -- common/autotest_common.sh@1104 -- # unittest_json 00:07:14.972 10:21:08 -- unit/unittest.sh@75 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_parse.c/json_parse_ut 00:07:14.972 00:07:14.972 00:07:14.972 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.972 http://cunit.sourceforge.net/ 00:07:14.972 00:07:14.972 00:07:14.972 Suite: json 00:07:14.972 Test: test_parse_literal ...passed 00:07:14.972 Test: test_parse_string_simple ...passed 00:07:14.972 Test: test_parse_string_control_chars ...passed 00:07:14.972 Test: test_parse_string_utf8 ...passed 00:07:14.972 Test: test_parse_string_escapes_twochar ...passed 00:07:14.972 Test: test_parse_string_escapes_unicode ...passed 00:07:14.972 Test: test_parse_number ...passed 00:07:14.972 Test: test_parse_array ...passed 00:07:14.972 Test: test_parse_object ...passed 00:07:14.972 Test: test_parse_nesting ...passed 00:07:14.972 Test: test_parse_comment ...passed 00:07:14.972 00:07:14.972 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.972 suites 1 1 n/a 0 0 00:07:14.972 tests 11 11 11 0 0 00:07:14.972 asserts 1516 1516 1516 0 n/a 00:07:14.972 00:07:14.972 Elapsed time = 0.002 seconds 00:07:14.972 10:21:08 -- unit/unittest.sh@76 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_util.c/json_util_ut 00:07:14.972 00:07:14.972 00:07:14.972 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.972 http://cunit.sourceforge.net/ 00:07:14.972 00:07:14.972 00:07:14.972 Suite: json 00:07:14.972 Test: test_strequal ...passed 00:07:14.972 Test: test_num_to_uint16 ...passed 00:07:14.972 Test: test_num_to_int32 ...passed 00:07:14.972 Test: test_num_to_uint64 ...passed 00:07:14.972 Test: test_decode_object ...passed 00:07:14.972 Test: test_decode_array ...passed 00:07:14.972 Test: test_decode_bool ...passed 00:07:14.972 Test: test_decode_uint16 ...passed 00:07:14.972 Test: test_decode_int32 ...passed 00:07:14.972 Test: test_decode_uint32 ...passed 00:07:14.972 Test: test_decode_uint64 ...passed 00:07:14.972 Test: test_decode_string ...passed 00:07:14.972 Test: test_decode_uuid ...passed 00:07:14.972 Test: test_find ...passed 00:07:14.972 Test: test_find_array ...passed 00:07:14.972 Test: test_iterating ...passed 00:07:14.972 Test: test_free_object ...passed 00:07:14.972 00:07:14.972 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.972 suites 1 1 n/a 0 0 00:07:14.972 tests 17 17 17 0 0 00:07:14.972 asserts 236 236 236 0 n/a 00:07:14.972 00:07:14.972 Elapsed time = 0.001 seconds 00:07:14.972 10:21:08 -- 
unit/unittest.sh@77 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_write.c/json_write_ut 00:07:14.972 00:07:14.972 00:07:14.972 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.972 http://cunit.sourceforge.net/ 00:07:14.972 00:07:14.972 00:07:14.972 Suite: json 00:07:14.972 Test: test_write_literal ...passed 00:07:14.972 Test: test_write_string_simple ...passed 00:07:14.972 Test: test_write_string_escapes ...passed 00:07:14.972 Test: test_write_string_utf16le ...passed 00:07:14.972 Test: test_write_number_int32 ...passed 00:07:14.972 Test: test_write_number_uint32 ...passed 00:07:14.972 Test: test_write_number_uint128 ...passed 00:07:14.972 Test: test_write_string_number_uint128 ...passed 00:07:14.972 Test: test_write_number_int64 ...passed 00:07:14.972 Test: test_write_number_uint64 ...passed 00:07:14.972 Test: test_write_number_double ...passed 00:07:14.972 Test: test_write_uuid ...passed 00:07:14.972 Test: test_write_array ...passed 00:07:14.972 Test: test_write_object ...passed 00:07:14.972 Test: test_write_nesting ...passed 00:07:14.972 Test: test_write_val ...passed 00:07:14.972 00:07:14.972 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.972 suites 1 1 n/a 0 0 00:07:14.972 tests 16 16 16 0 0 00:07:14.972 asserts 918 918 918 0 n/a 00:07:14.972 00:07:14.972 Elapsed time = 0.005 seconds 00:07:14.972 10:21:08 -- unit/unittest.sh@78 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut 00:07:14.972 00:07:14.972 00:07:14.972 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.972 http://cunit.sourceforge.net/ 00:07:14.972 00:07:14.972 00:07:14.972 Suite: jsonrpc 00:07:14.972 Test: test_parse_request ...passed 00:07:14.972 Test: test_parse_request_streaming ...passed 00:07:14.972 00:07:14.972 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.972 suites 1 1 n/a 0 0 00:07:14.972 tests 2 2 2 0 0 00:07:14.972 asserts 289 289 289 0 n/a 00:07:14.972 00:07:14.972 Elapsed time = 0.004 seconds 00:07:14.972 00:07:14.972 real 0m0.140s 00:07:14.972 user 0m0.071s 00:07:14.972 sys 0m0.063s 00:07:14.972 10:21:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.972 10:21:08 -- common/autotest_common.sh@10 -- # set +x 00:07:14.972 ************************************ 00:07:14.972 END TEST unittest_json 00:07:14.972 ************************************ 00:07:14.972 10:21:08 -- unit/unittest.sh@244 -- # run_test unittest_rpc unittest_rpc 00:07:14.972 10:21:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:14.972 10:21:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:14.972 10:21:08 -- common/autotest_common.sh@10 -- # set +x 00:07:14.972 ************************************ 00:07:14.972 START TEST unittest_rpc 00:07:14.972 ************************************ 00:07:14.972 10:21:08 -- common/autotest_common.sh@1104 -- # unittest_rpc 00:07:14.972 10:21:08 -- unit/unittest.sh@82 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rpc/rpc.c/rpc_ut 00:07:14.972 00:07:14.972 00:07:14.972 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.972 http://cunit.sourceforge.net/ 00:07:14.972 00:07:14.972 00:07:14.972 Suite: rpc 00:07:14.972 Test: test_jsonrpc_handler ...passed 00:07:14.972 Test: test_spdk_rpc_is_method_allowed ...passed 00:07:14.972 Test: test_rpc_get_methods ...[2024-07-12 10:21:08.848376] /home/vagrant/spdk_repo/spdk/lib/rpc/rpc.c: 378:rpc_get_methods: *ERROR*: spdk_json_decode_object failed 00:07:14.972 passed 00:07:14.972 Test: test_rpc_spdk_get_version 
...passed 00:07:14.972 Test: test_spdk_rpc_listen_close ...passed 00:07:14.972 00:07:14.972 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.972 suites 1 1 n/a 0 0 00:07:14.972 tests 5 5 5 0 0 00:07:14.972 asserts 20 20 20 0 n/a 00:07:14.972 00:07:14.972 Elapsed time = 0.001 seconds 00:07:14.972 00:07:14.972 real 0m0.032s 00:07:14.972 user 0m0.022s 00:07:14.972 sys 0m0.009s 00:07:14.972 10:21:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.972 10:21:08 -- common/autotest_common.sh@10 -- # set +x 00:07:14.972 ************************************ 00:07:14.972 END TEST unittest_rpc 00:07:14.972 ************************************ 00:07:14.972 10:21:08 -- unit/unittest.sh@245 -- # run_test unittest_notify /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:07:14.972 10:21:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:14.972 10:21:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:14.972 10:21:08 -- common/autotest_common.sh@10 -- # set +x 00:07:15.231 ************************************ 00:07:15.231 START TEST unittest_notify 00:07:15.231 ************************************ 00:07:15.231 10:21:08 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:07:15.231 00:07:15.231 00:07:15.231 CUnit - A unit testing framework for C - Version 2.1-3 00:07:15.231 http://cunit.sourceforge.net/ 00:07:15.231 00:07:15.231 00:07:15.231 Suite: app_suite 00:07:15.231 Test: notify ...passed 00:07:15.231 00:07:15.231 Run Summary: Type Total Ran Passed Failed Inactive 00:07:15.231 suites 1 1 n/a 0 0 00:07:15.231 tests 1 1 1 0 0 00:07:15.231 asserts 13 13 13 0 n/a 00:07:15.231 00:07:15.231 Elapsed time = 0.000 seconds 00:07:15.231 00:07:15.231 real 0m0.033s 00:07:15.231 user 0m0.028s 00:07:15.231 sys 0m0.005s 00:07:15.231 10:21:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.231 ************************************ 00:07:15.231 END TEST unittest_notify 00:07:15.231 10:21:08 -- common/autotest_common.sh@10 -- # set +x 00:07:15.231 ************************************ 00:07:15.231 10:21:08 -- unit/unittest.sh@246 -- # run_test unittest_nvme unittest_nvme 00:07:15.231 10:21:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:15.231 10:21:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:15.231 10:21:08 -- common/autotest_common.sh@10 -- # set +x 00:07:15.231 ************************************ 00:07:15.231 START TEST unittest_nvme 00:07:15.231 ************************************ 00:07:15.231 10:21:08 -- common/autotest_common.sh@1104 -- # unittest_nvme 00:07:15.231 10:21:08 -- unit/unittest.sh@86 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme.c/nvme_ut 00:07:15.231 00:07:15.231 00:07:15.231 CUnit - A unit testing framework for C - Version 2.1-3 00:07:15.231 http://cunit.sourceforge.net/ 00:07:15.231 00:07:15.231 00:07:15.231 Suite: nvme 00:07:15.231 Test: test_opc_data_transfer ...passed 00:07:15.231 Test: test_spdk_nvme_transport_id_parse_trtype ...passed 00:07:15.231 Test: test_spdk_nvme_transport_id_parse_adrfam ...passed 00:07:15.231 Test: test_trid_parse_and_compare ...[2024-07-12 10:21:09.004216] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1167:parse_next_key: *ERROR*: Key without ':' or '=' separator 00:07:15.231 [2024-07-12 10:21:09.004714] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:07:15.231 [2024-07-12 10:21:09.004922] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1179:parse_next_key: *ERROR*: Key length 32 greater than maximum allowed 31 00:07:15.231 [2024-07-12 10:21:09.005065] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:07:15.231 [2024-07-12 10:21:09.005168] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1190:parse_next_key: *ERROR*: Key without value 00:07:15.231 [2024-07-12 10:21:09.005315] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:07:15.231 passed 00:07:15.231 Test: test_trid_trtype_str ...passed 00:07:15.231 Test: test_trid_adrfam_str ...passed 00:07:15.231 Test: test_nvme_ctrlr_probe ...[2024-07-12 10:21:09.005982] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:07:15.231 passed 00:07:15.231 Test: test_spdk_nvme_probe ...[2024-07-12 10:21:09.006247] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:07:15.231 [2024-07-12 10:21:09.006389] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:07:15.231 [2024-07-12 10:21:09.006593] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 812:nvme_probe_internal: *ERROR*: NVMe trtype 256 (PCIE) not available 00:07:15.231 [2024-07-12 10:21:09.006780] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:07:15.231 passed 00:07:15.231 Test: test_spdk_nvme_connect ...[2024-07-12 10:21:09.006960] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 989:spdk_nvme_connect: *ERROR*: No transport ID specified 00:07:15.231 [2024-07-12 10:21:09.007454] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:07:15.231 [2024-07-12 10:21:09.007671] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1000:spdk_nvme_connect: *ERROR*: Create probe context failed 00:07:15.231 passed 00:07:15.231 Test: test_nvme_ctrlr_probe_internal ...[2024-07-12 10:21:09.008115] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:07:15.231 [2024-07-12 10:21:09.008275] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:07:15.231 passed 00:07:15.231 Test: test_nvme_init_controllers ...[2024-07-12 10:21:09.008733] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 00:07:15.231 passed 00:07:15.231 Test: test_nvme_driver_init ...[2024-07-12 10:21:09.009121] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 578:nvme_driver_init: *ERROR*: primary process failed to reserve memory 00:07:15.231 [2024-07-12 10:21:09.009293] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:07:15.231 [2024-07-12 10:21:09.122957] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 596:nvme_driver_init: *ERROR*: timeout waiting for primary process to init 00:07:15.231 [2024-07-12 10:21:09.123516] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 618:nvme_driver_init: *ERROR*: failed to initialize mutex 00:07:15.231 passed 00:07:15.231 Test: test_spdk_nvme_detach ...passed 00:07:15.231 Test: test_nvme_completion_poll_cb ...passed 00:07:15.231 Test: test_nvme_user_copy_cmd_complete ...passed 00:07:15.231 Test: test_nvme_allocate_request_null ...passed 
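Note: test_trid_parse_and_compare above exercises transport ID strings, which are whitespace-separated key:value pairs such as "trtype:PCIe traddr:0000:00:04.0"; the logged failures cover a key missing its ':' or '=' separator, a key over 31 bytes, and a key without a value. A sketch of the per-token validation (illustrative, not the spdk_nvme_transport_id_parse source):

    #include <stdbool.h>
    #include <stddef.h>
    #include <string.h>

    #define TRID_MAX_KEY_LEN 31  /* "Key length 32 greater than maximum allowed 31" */

    /* Validate one "key:value" (or "key=value") token of a transport ID string. */
    static bool trid_token_ok(const char *token)
    {
        const char *sep = strpbrk(token, ":=");
        if (sep == NULL) {
            return false;                         /* key without ':' or '=' separator */
        }
        size_t key_len = (size_t)(sep - token);
        if (key_len == 0 || key_len > TRID_MAX_KEY_LEN) {
            return false;                         /* empty or oversized key */
        }
        if (sep[1] == '\0') {
            return false;                         /* key without value */
        }
        return true;
    }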
00:07:15.231 Test: test_nvme_allocate_request ...passed 00:07:15.231 Test: test_nvme_free_request ...passed 00:07:15.231 Test: test_nvme_allocate_request_user_copy ...passed 00:07:15.231 Test: test_nvme_robust_mutex_init_shared ...passed 00:07:15.231 Test: test_nvme_request_check_timeout ...passed 00:07:15.231 Test: test_nvme_wait_for_completion ...passed 00:07:15.231 Test: test_spdk_nvme_parse_func ...passed 00:07:15.231 Test: test_spdk_nvme_detach_async ...passed 00:07:15.231 Test: test_nvme_parse_addr ...[2024-07-12 10:21:09.127763] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1577:nvme_parse_addr: *ERROR*: addr and service must both be non-NULL 00:07:15.231 passed 00:07:15.231 00:07:15.231 Run Summary: Type Total Ran Passed Failed Inactive 00:07:15.231 suites 1 1 n/a 0 0 00:07:15.231 tests 25 25 25 0 0 00:07:15.231 asserts 326 326 326 0 n/a 00:07:15.231 00:07:15.231 Elapsed time = 0.008 seconds 00:07:15.231 10:21:09 -- unit/unittest.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut 00:07:15.231 00:07:15.231 00:07:15.231 CUnit - A unit testing framework for C - Version 2.1-3 00:07:15.231 http://cunit.sourceforge.net/ 00:07:15.231 00:07:15.231 00:07:15.231 Suite: nvme_ctrlr 00:07:15.231 Test: test_nvme_ctrlr_init_en_1_rdy_0 ...[2024-07-12 10:21:09.159214] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:15.491 passed 00:07:15.491 Test: test_nvme_ctrlr_init_en_1_rdy_1 ...[2024-07-12 10:21:09.161004] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:15.491 passed 00:07:15.491 Test: test_nvme_ctrlr_init_en_0_rdy_0 ...[2024-07-12 10:21:09.162381] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:15.491 passed 00:07:15.491 Test: test_nvme_ctrlr_init_en_0_rdy_1 ...[2024-07-12 10:21:09.163641] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:15.491 passed 00:07:15.491 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_rr ...[2024-07-12 10:21:09.165013] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:15.491 [2024-07-12 10:21:09.166274] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-12 10:21:09.167480] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-12 10:21:09.168704] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:07:15.491 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_wrr ...[2024-07-12 10:21:09.171184] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:15.491 [2024-07-12 10:21:09.173567] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-12 10:21:09.174829] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:07:15.491 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_vs ...[2024-07-12 10:21:09.177305] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:15.491 [2024-07-12 10:21:09.178554] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-12 10:21:09.180918] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:07:15.491 Test: test_nvme_ctrlr_init_delay ...[2024-07-12 10:21:09.183401] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:15.491 passed 00:07:15.491 Test: test_alloc_io_qpair_rr_1 ...[2024-07-12 10:21:09.184755] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:15.491 [2024-07-12 10:21:09.184903] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5318:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:07:15.491 [2024-07-12 10:21:09.185125] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:07:15.491 passed 00:07:15.491 Test: test_ctrlr_get_default_ctrlr_opts ...passed 00:07:15.491 Test: test_ctrlr_get_default_io_qpair_opts ...passed 00:07:15.491 Test: test_alloc_io_qpair_wrr_1 ...[2024-07-12 10:21:09.185256] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:07:15.491 [2024-07-12 10:21:09.185303] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:07:15.491 [2024-07-12 10:21:09.185477] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:15.491 passed 00:07:15.491 Test: test_alloc_io_qpair_wrr_2 ...[2024-07-12 10:21:09.185673] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:15.491 [2024-07-12 10:21:09.185801] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5318:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:07:15.491 passed 00:07:15.491 Test: test_spdk_nvme_ctrlr_update_firmware ...[2024-07-12 10:21:09.186079] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4846:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_update_firmware invalid size! 00:07:15.491 [2024-07-12 10:21:09.186236] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4883:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:07:15.491 [2024-07-12 10:21:09.186334] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4923:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] nvme_ctrlr_cmd_fw_commit failed! 
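Note: test_spdk_nvme_ctrlr_update_firmware drives the three failure paths logged above in order: an invalid image size, a failed image download, and a failed commit. NVMe firmware download transfers data in dword units, so the size check plausibly looks like the following (an assumption for illustration; the exact constraint lives in spdk_nvme_ctrlr_update_firmware):

    #include <stdbool.h>
    #include <stdint.h>

    /* Assumed check: a firmware image must be non-empty and dword-aligned. */
    static bool fw_image_size_ok(uint32_t size_bytes)
    {
        return size_bytes > 0 && (size_bytes % 4) == 0;
    }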
00:07:15.491 passed 00:07:15.491 Test: test_nvme_ctrlr_fail ...[2024-07-12 10:21:09.186402] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4883:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:07:15.491 [2024-07-12 10:21:09.186465] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [] in failed state. 00:07:15.491 passed 00:07:15.491 Test: test_nvme_ctrlr_construct_intel_support_log_page_list ...passed 00:07:15.491 Test: test_nvme_ctrlr_set_supported_features ...passed 00:07:15.491 Test: test_spdk_nvme_ctrlr_doorbell_buffer_config ...passed 00:07:15.491 Test: test_nvme_ctrlr_test_active_ns ...[2024-07-12 10:21:09.186759] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:15.750 passed 00:07:15.750 Test: test_nvme_ctrlr_test_active_ns_error_case ...passed 00:07:15.750 Test: test_spdk_nvme_ctrlr_reconnect_io_qpair ...passed 00:07:15.750 Test: test_spdk_nvme_ctrlr_set_trid ...passed 00:07:15.750 Test: test_nvme_ctrlr_init_set_nvmf_ioccsz ...[2024-07-12 10:21:09.516086] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:15.750 passed 00:07:15.750 Test: test_nvme_ctrlr_init_set_num_queues ...[2024-07-12 10:21:09.523536] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:15.750 passed 00:07:15.750 Test: test_nvme_ctrlr_init_set_keep_alive_timeout ...[2024-07-12 10:21:09.524848] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:15.750 [2024-07-12 10:21:09.524941] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:2870:nvme_ctrlr_set_keep_alive_timeout_done: *ERROR*: [] Keep alive timeout Get Feature failed: SC 6 SCT 0 00:07:15.750 passed 00:07:15.750 Test: test_alloc_io_qpair_fail ...[2024-07-12 10:21:09.526145] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:15.750 passed 00:07:15.750 Test: test_nvme_ctrlr_add_remove_process ...passed 00:07:15.750 Test: test_nvme_ctrlr_set_arbitration_feature ...[2024-07-12 10:21:09.526290] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 497:spdk_nvme_ctrlr_alloc_io_qpair: *ERROR*: [] nvme_transport_ctrlr_connect_io_qpair() failed 00:07:15.750 passed 00:07:15.750 Test: test_nvme_ctrlr_set_state ...passed 00:07:15.750 Test: test_nvme_ctrlr_active_ns_list_v0 ...[2024-07-12 10:21:09.526419] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1465:_nvme_ctrlr_set_state: *ERROR*: [] Specified timeout would cause integer overflow. Defaulting to no timeout. 
00:07:15.750 [2024-07-12 10:21:09.526454] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:15.750 passed 00:07:15.750 Test: test_nvme_ctrlr_active_ns_list_v2 ...[2024-07-12 10:21:09.550098] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:15.750 passed 00:07:15.750 Test: test_nvme_ctrlr_ns_mgmt ...[2024-07-12 10:21:09.602317] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:15.750 passed 00:07:15.750 Test: test_nvme_ctrlr_reset ...[2024-07-12 10:21:09.603999] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:15.750 passed 00:07:15.750 Test: test_nvme_ctrlr_aer_callback ...[2024-07-12 10:21:09.604463] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:15.750 passed 00:07:15.750 Test: test_nvme_ctrlr_ns_attr_changed ...[2024-07-12 10:21:09.606056] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:15.750 passed 00:07:15.750 Test: test_nvme_ctrlr_identify_namespaces_iocs_specific_next ...passed 00:07:15.750 Test: test_nvme_ctrlr_set_supported_log_pages ...passed 00:07:15.750 Test: test_nvme_ctrlr_set_intel_supported_log_pages ...[2024-07-12 10:21:09.608117] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:15.750 passed 00:07:15.750 Test: test_nvme_ctrlr_parse_ana_log_page ...passed 00:07:15.751 Test: test_nvme_ctrlr_ana_resize ...[2024-07-12 10:21:09.609556] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:15.751 passed 00:07:15.751 Test: test_nvme_ctrlr_get_memory_domains ...passed 00:07:15.751 Test: test_nvme_transport_ctrlr_ready ...[2024-07-12 10:21:09.611285] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4016:nvme_ctrlr_process_init: *ERROR*: [] Transport controller ready step failed: rc -1 00:07:15.751 passed 00:07:15.751 Test: test_nvme_ctrlr_disable ...[2024-07-12 10:21:09.611408] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4067:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr operation failed with error: -1, ctrlr state: 51 (error) 00:07:15.751 [2024-07-12 10:21:09.611509] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:15.751 passed 00:07:15.751 00:07:15.751 Run Summary: Type Total Ran Passed Failed Inactive 00:07:15.751 suites 1 1 n/a 0 0 00:07:15.751 tests 43 43 43 0 0 00:07:15.751 asserts 10418 10418 10418 0 n/a 00:07:15.751 00:07:15.751 Elapsed time = 0.406 seconds 00:07:15.751 10:21:09 -- unit/unittest.sh@88 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut 00:07:15.751 00:07:15.751 00:07:15.751 CUnit - A unit testing framework for C - Version 2.1-3 
00:07:15.751 http://cunit.sourceforge.net/ 00:07:15.751 00:07:15.751 00:07:15.751 Suite: nvme_ctrlr_cmd 00:07:15.751 Test: test_get_log_pages ...passed 00:07:15.751 Test: test_set_feature_cmd ...passed 00:07:15.751 Test: test_set_feature_ns_cmd ...passed 00:07:15.751 Test: test_get_feature_cmd ...passed 00:07:15.751 Test: test_get_feature_ns_cmd ...passed 00:07:15.751 Test: test_abort_cmd ...passed 00:07:15.751 Test: test_set_host_id_cmds ...[2024-07-12 10:21:09.655806] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr_cmd.c: 508:nvme_ctrlr_cmd_set_host_id: *ERROR*: Invalid host ID size 1024 00:07:15.751 passed 00:07:15.751 Test: test_io_cmd_raw_no_payload_build ...passed 00:07:15.751 Test: test_io_raw_cmd ...passed 00:07:15.751 Test: test_io_raw_cmd_with_md ...passed 00:07:15.751 Test: test_namespace_attach ...passed 00:07:15.751 Test: test_namespace_detach ...passed 00:07:15.751 Test: test_namespace_create ...passed 00:07:15.751 Test: test_namespace_delete ...passed 00:07:15.751 Test: test_doorbell_buffer_config ...passed 00:07:15.751 Test: test_format_nvme ...passed 00:07:15.751 Test: test_fw_commit ...passed 00:07:15.751 Test: test_fw_image_download ...passed 00:07:15.751 Test: test_sanitize ...passed 00:07:15.751 Test: test_directive ...passed 00:07:15.751 Test: test_nvme_request_add_abort ...passed 00:07:15.751 Test: test_spdk_nvme_ctrlr_cmd_abort ...passed 00:07:15.751 Test: test_nvme_ctrlr_cmd_identify ...passed 00:07:15.751 Test: test_spdk_nvme_ctrlr_cmd_security_receive_send ...passed 00:07:15.751 00:07:15.751 Run Summary: Type Total Ran Passed Failed Inactive 00:07:15.751 suites 1 1 n/a 0 0 00:07:15.751 tests 24 24 24 0 0 00:07:15.751 asserts 198 198 198 0 n/a 00:07:15.751 00:07:15.751 Elapsed time = 0.001 seconds 00:07:15.751 10:21:09 -- unit/unittest.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut 00:07:16.010 00:07:16.010 00:07:16.010 CUnit - A unit testing framework for C - Version 2.1-3 00:07:16.010 http://cunit.sourceforge.net/ 00:07:16.010 00:07:16.010 00:07:16.010 Suite: nvme_ctrlr_cmd 00:07:16.010 Test: test_geometry_cmd ...passed 00:07:16.010 Test: test_spdk_nvme_ctrlr_is_ocssd_supported ...passed 00:07:16.010 00:07:16.010 Run Summary: Type Total Ran Passed Failed Inactive 00:07:16.010 suites 1 1 n/a 0 0 00:07:16.010 tests 2 2 2 0 0 00:07:16.010 asserts 7 7 7 0 n/a 00:07:16.010 00:07:16.010 Elapsed time = 0.000 seconds 00:07:16.010 10:21:09 -- unit/unittest.sh@90 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut 00:07:16.010 00:07:16.010 00:07:16.010 CUnit - A unit testing framework for C - Version 2.1-3 00:07:16.010 http://cunit.sourceforge.net/ 00:07:16.010 00:07:16.010 00:07:16.010 Suite: nvme 00:07:16.010 Test: test_nvme_ns_construct ...passed 00:07:16.010 Test: test_nvme_ns_uuid ...passed 00:07:16.010 Test: test_nvme_ns_csi ...passed 00:07:16.010 Test: test_nvme_ns_data ...passed 00:07:16.010 Test: test_nvme_ns_set_identify_data ...passed 00:07:16.010 Test: test_spdk_nvme_ns_get_values ...passed 00:07:16.010 Test: test_spdk_nvme_ns_is_active ...passed 00:07:16.010 Test: spdk_nvme_ns_supports ...passed 00:07:16.010 Test: test_nvme_ns_has_supported_iocs_specific_data ...passed 00:07:16.010 Test: test_nvme_ctrlr_identify_ns_iocs_specific ...passed 00:07:16.010 Test: test_nvme_ctrlr_identify_id_desc ...passed 00:07:16.010 Test: test_nvme_ns_find_id_desc ...passed 00:07:16.010 00:07:16.010 Run Summary: Type Total Ran Passed Failed Inactive 00:07:16.010 suites 1 1 n/a 0 0 00:07:16.010 tests 
12 12 12 0 0 00:07:16.010 asserts 83 83 83 0 n/a 00:07:16.010 00:07:16.010 Elapsed time = 0.001 seconds 00:07:16.010 10:21:09 -- unit/unittest.sh@91 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut 00:07:16.010 00:07:16.010 00:07:16.010 CUnit - A unit testing framework for C - Version 2.1-3 00:07:16.010 http://cunit.sourceforge.net/ 00:07:16.010 00:07:16.010 00:07:16.010 Suite: nvme_ns_cmd 00:07:16.010 Test: split_test ...passed 00:07:16.010 Test: split_test2 ...passed 00:07:16.010 Test: split_test3 ...passed 00:07:16.010 Test: split_test4 ...passed 00:07:16.010 Test: test_nvme_ns_cmd_flush ...passed 00:07:16.010 Test: test_nvme_ns_cmd_dataset_management ...passed 00:07:16.010 Test: test_nvme_ns_cmd_copy ...passed 00:07:16.010 Test: test_io_flags ...[2024-07-12 10:21:09.745828] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xfffc 00:07:16.010 passed 00:07:16.010 Test: test_nvme_ns_cmd_write_zeroes ...passed 00:07:16.010 Test: test_nvme_ns_cmd_write_uncorrectable ...passed 00:07:16.010 Test: test_nvme_ns_cmd_reservation_register ...passed 00:07:16.010 Test: test_nvme_ns_cmd_reservation_release ...passed 00:07:16.010 Test: test_nvme_ns_cmd_reservation_acquire ...passed 00:07:16.010 Test: test_nvme_ns_cmd_reservation_report ...passed 00:07:16.010 Test: test_cmd_child_request ...passed 00:07:16.010 Test: test_nvme_ns_cmd_readv ...passed 00:07:16.010 Test: test_nvme_ns_cmd_read_with_md ...passed 00:07:16.010 Test: test_nvme_ns_cmd_writev ...[2024-07-12 10:21:09.746976] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 287:_nvme_ns_cmd_split_request_prp: *ERROR*: child_length 200 not even multiple of lba_size 512 00:07:16.010 passed 00:07:16.010 Test: test_nvme_ns_cmd_write_with_md ...passed 00:07:16.010 Test: test_nvme_ns_cmd_zone_append_with_md ...passed 00:07:16.010 Test: test_nvme_ns_cmd_zone_appendv_with_md ...passed 00:07:16.010 Test: test_nvme_ns_cmd_comparev ...passed 00:07:16.010 Test: test_nvme_ns_cmd_compare_and_write ...passed 00:07:16.010 Test: test_nvme_ns_cmd_compare_with_md ...passed 00:07:16.010 Test: test_nvme_ns_cmd_comparev_with_md ...passed 00:07:16.010 Test: test_nvme_ns_cmd_setup_request ...passed 00:07:16.010 Test: test_spdk_nvme_ns_cmd_readv_with_md ...passed 00:07:16.011 Test: test_spdk_nvme_ns_cmd_writev_ext ...[2024-07-12 10:21:09.748718] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:07:16.011 passed 00:07:16.011 Test: test_spdk_nvme_ns_cmd_readv_ext ...passed 00:07:16.011 Test: test_nvme_ns_cmd_verify ...[2024-07-12 10:21:09.748819] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:07:16.011 passed 00:07:16.011 Test: test_nvme_ns_cmd_io_mgmt_send ...passed 00:07:16.011 Test: test_nvme_ns_cmd_io_mgmt_recv ...passed 00:07:16.011 00:07:16.011 Run Summary: Type Total Ran Passed Failed Inactive 00:07:16.011 suites 1 1 n/a 0 0 00:07:16.011 tests 32 32 32 0 0 00:07:16.011 asserts 550 550 550 0 n/a 00:07:16.011 00:07:16.011 Elapsed time = 0.004 seconds 00:07:16.011 10:21:09 -- unit/unittest.sh@92 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut 00:07:16.011 00:07:16.011 00:07:16.011 CUnit - A unit testing framework for C - Version 2.1-3 00:07:16.011 http://cunit.sourceforge.net/ 00:07:16.011 00:07:16.011 00:07:16.011 Suite: nvme_ns_cmd 00:07:16.011 Test: test_nvme_ocssd_ns_cmd_vector_reset ...passed 
00:07:16.011 Test: test_nvme_ocssd_ns_cmd_vector_reset_single_entry ...passed 00:07:16.011 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md ...passed 00:07:16.011 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md_single_entry ...passed 00:07:16.011 Test: test_nvme_ocssd_ns_cmd_vector_read ...passed 00:07:16.011 Test: test_nvme_ocssd_ns_cmd_vector_read_single_entry ...passed 00:07:16.011 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md ...passed 00:07:16.011 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md_single_entry ...passed 00:07:16.011 Test: test_nvme_ocssd_ns_cmd_vector_write ...passed 00:07:16.011 Test: test_nvme_ocssd_ns_cmd_vector_write_single_entry ...passed 00:07:16.011 Test: test_nvme_ocssd_ns_cmd_vector_copy ...passed 00:07:16.011 Test: test_nvme_ocssd_ns_cmd_vector_copy_single_entry ...passed 00:07:16.011 00:07:16.011 Run Summary: Type Total Ran Passed Failed Inactive 00:07:16.011 suites 1 1 n/a 0 0 00:07:16.011 tests 12 12 12 0 0 00:07:16.011 asserts 123 123 123 0 n/a 00:07:16.011 00:07:16.011 Elapsed time = 0.001 seconds 00:07:16.011 10:21:09 -- unit/unittest.sh@93 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut 00:07:16.011 00:07:16.011 00:07:16.011 CUnit - A unit testing framework for C - Version 2.1-3 00:07:16.011 http://cunit.sourceforge.net/ 00:07:16.011 00:07:16.011 00:07:16.011 Suite: nvme_qpair 00:07:16.011 Test: test3 ...passed 00:07:16.011 Test: test_ctrlr_failed ...passed 00:07:16.011 Test: struct_packing ...passed 00:07:16.011 Test: test_nvme_qpair_process_completions ...[2024-07-12 10:21:09.814776] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:07:16.011 [2024-07-12 10:21:09.815081] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:07:16.011 passed 00:07:16.011 Test: test_nvme_completion_is_retry ...passed 00:07:16.011 Test: test_get_status_string ...passed 00:07:16.011 Test: test_nvme_qpair_add_cmd_error_injection ...[2024-07-12 10:21:09.815165] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:07:16.011 [2024-07-12 10:21:09.815250] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:07:16.011 passed 00:07:16.011 Test: test_nvme_qpair_submit_request ...passed 00:07:16.011 Test: test_nvme_qpair_resubmit_request_with_transport_failed ...passed 00:07:16.011 Test: test_nvme_qpair_manual_complete_request ...passed 00:07:16.011 Test: test_nvme_qpair_init_deinit ...[2024-07-12 10:21:09.815710] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:07:16.011 passed 00:07:16.011 Test: test_nvme_get_sgl_print_info ...passed 00:07:16.011 00:07:16.011 Run Summary: Type Total Ran Passed Failed Inactive 00:07:16.011 suites 1 1 n/a 0 0 00:07:16.011 tests 12 12 12 0 0 00:07:16.011 asserts 154 154 154 0 n/a 00:07:16.011 00:07:16.011 Elapsed time = 0.001 seconds 00:07:16.011 10:21:09 -- unit/unittest.sh@94 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut 00:07:16.011 00:07:16.011 00:07:16.011 CUnit - A unit testing framework for C - Version 2.1-3 00:07:16.011 http://cunit.sourceforge.net/ 00:07:16.011 00:07:16.011 00:07:16.011 Suite: nvme_pcie 00:07:16.011 Test: test_prp_list_append 
...[2024-07-12 10:21:09.843631] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:07:16.011 [2024-07-12 10:21:09.843928] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *ERROR*: PRP 2 not page aligned (0x900800) 00:07:16.011 [2024-07-12 10:21:09.843981] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1221:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x100000) failed 00:07:16.011 [2024-07-12 10:21:09.844225] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:07:16.011 passed 00:07:16.011 Test: test_nvme_pcie_hotplug_monitor ...[2024-07-12 10:21:09.844314] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:07:16.011 passed 00:07:16.011 Test: test_shadow_doorbell_update ...passed 00:07:16.011 Test: test_build_contig_hw_sgl_request ...passed 00:07:16.011 Test: test_nvme_pcie_qpair_build_metadata ...passed 00:07:16.011 Test: test_nvme_pcie_qpair_build_prps_sgl_request ...passed 00:07:16.011 Test: test_nvme_pcie_qpair_build_hw_sgl_request ...passed 00:07:16.011 Test: test_nvme_pcie_qpair_build_contig_request ...passed 00:07:16.011 Test: test_nvme_pcie_ctrlr_regs_get_set ...passed 00:07:16.011 Test: test_nvme_pcie_ctrlr_map_unmap_cmb ...passed 00:07:16.011 Test: test_nvme_pcie_ctrlr_map_io_cmb ...[2024-07-12 10:21:09.844491] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:07:16.011 [2024-07-12 10:21:09.844577] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 442:nvme_pcie_ctrlr_map_io_cmb: *ERROR*: CMB is already in use for submission queues. 
00:07:16.011 passed 00:07:16.011 Test: test_nvme_pcie_ctrlr_map_unmap_pmr ...passed 00:07:16.011 Test: test_nvme_pcie_ctrlr_config_pmr ...[2024-07-12 10:21:09.844638] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 521:nvme_pcie_ctrlr_map_pmr: *ERROR*: invalid base indicator register value 00:07:16.011 [2024-07-12 10:21:09.844670] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 647:nvme_pcie_ctrlr_config_pmr: *ERROR*: PMR is already disabled 00:07:16.011 passed 00:07:16.011 Test: test_nvme_pcie_ctrlr_map_io_pmr ...passed 00:07:16.011 00:07:16.011 Run Summary: Type Total Ran Passed Failed Inactive 00:07:16.011 suites 1 1 n/a 0 0 00:07:16.011 tests 14 14 14 0 0 00:07:16.011 asserts 235 235 235 0 n/a 00:07:16.011 00:07:16.011 Elapsed time = 0.001 seconds 00:07:16.011 [2024-07-12 10:21:09.844704] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 699:nvme_pcie_ctrlr_map_io_pmr: *ERROR*: PMR is not supported by the controller 00:07:16.011 10:21:09 -- unit/unittest.sh@95 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut 00:07:16.011 00:07:16.011 00:07:16.011 CUnit - A unit testing framework for C - Version 2.1-3 00:07:16.011 http://cunit.sourceforge.net/ 00:07:16.011 00:07:16.011 00:07:16.011 Suite: nvme_ns_cmd 00:07:16.011 Test: nvme_poll_group_create_test ...passed 00:07:16.011 Test: nvme_poll_group_add_remove_test ...passed 00:07:16.011 Test: nvme_poll_group_process_completions ...passed 00:07:16.011 Test: nvme_poll_group_destroy_test ...passed 00:07:16.011 Test: nvme_poll_group_get_free_stats ...passed 00:07:16.011 00:07:16.011 Run Summary: Type Total Ran Passed Failed Inactive 00:07:16.011 suites 1 1 n/a 0 0 00:07:16.011 tests 5 5 5 0 0 00:07:16.011 asserts 75 75 75 0 n/a 00:07:16.011 00:07:16.011 Elapsed time = 0.000 seconds 00:07:16.011 10:21:09 -- unit/unittest.sh@96 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut 00:07:16.011 00:07:16.011 00:07:16.011 CUnit - A unit testing framework for C - Version 2.1-3 00:07:16.011 http://cunit.sourceforge.net/ 00:07:16.011 00:07:16.011 00:07:16.011 Suite: nvme_quirks 00:07:16.011 Test: test_nvme_quirks_striping ...passed 00:07:16.011 00:07:16.011 Run Summary: Type Total Ran Passed Failed Inactive 00:07:16.011 suites 1 1 n/a 0 0 00:07:16.011 tests 1 1 1 0 0 00:07:16.011 asserts 5 5 5 0 n/a 00:07:16.011 00:07:16.011 Elapsed time = 0.000 seconds 00:07:16.011 10:21:09 -- unit/unittest.sh@97 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut 00:07:16.011 00:07:16.011 00:07:16.011 CUnit - A unit testing framework for C - Version 2.1-3 00:07:16.011 http://cunit.sourceforge.net/ 00:07:16.011 00:07:16.011 00:07:16.011 Suite: nvme_tcp 00:07:16.011 Test: test_nvme_tcp_pdu_set_data_buf ...passed 00:07:16.011 Test: test_nvme_tcp_build_iovs ...passed 00:07:16.011 Test: test_nvme_tcp_build_sgl_request ...[2024-07-12 10:21:09.933194] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 783:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x7fffe05e4780, and the iovcnt=16, remaining_size=28672 00:07:16.011 passed 00:07:16.011 Test: test_nvme_tcp_pdu_set_data_buf_with_md ...passed 00:07:16.011 Test: test_nvme_tcp_build_iovs_with_md ...passed 00:07:16.011 Test: test_nvme_tcp_req_complete_safe ...passed 00:07:16.011 Test: test_nvme_tcp_req_get ...passed 00:07:16.011 Test: test_nvme_tcp_req_init ...passed 00:07:16.011 Test: test_nvme_tcp_qpair_capsule_cmd_send ...passed 00:07:16.011 Test: test_nvme_tcp_qpair_write_pdu ...passed 00:07:16.011 Test: 
test_nvme_tcp_qpair_set_recv_state ...passed 00:07:16.011 Test: test_nvme_tcp_alloc_reqs ...[2024-07-12 10:21:09.933963] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fffe05e64a0 is same with the state(6) to be set 00:07:16.011 passed 00:07:16.011 Test: test_nvme_tcp_qpair_send_h2c_term_req ...passed 00:07:16.011 Test: test_nvme_tcp_pdu_ch_handle ...[2024-07-12 10:21:09.934334] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fffe05e5630 is same with the state(5) to be set 00:07:16.011 [2024-07-12 10:21:09.934410] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1108:nvme_tcp_pdu_ch_handle: *ERROR*: Already received IC_RESP PDU, and we should reject this pdu=0x7fffe05e6160 00:07:16.011 [2024-07-12 10:21:09.934461] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1167:nvme_tcp_pdu_ch_handle: *ERROR*: Expected PDU header length 128, got 0 00:07:16.012 [2024-07-12 10:21:09.934552] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fffe05e5af0 is same with the state(5) to be set 00:07:16.012 [2024-07-12 10:21:09.934663] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1118:nvme_tcp_pdu_ch_handle: *ERROR*: The TCP/IP tqpair connection is not negotiated 00:07:16.012 [2024-07-12 10:21:09.934763] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fffe05e5af0 is same with the state(5) to be set 00:07:16.012 [2024-07-12 10:21:09.934817] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:07:16.012 [2024-07-12 10:21:09.934855] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fffe05e5af0 is same with the state(5) to be set 00:07:16.012 [2024-07-12 10:21:09.934904] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fffe05e5af0 is same with the state(5) to be set 00:07:16.012 [2024-07-12 10:21:09.934945] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fffe05e5af0 is same with the state(5) to be set 00:07:16.012 [2024-07-12 10:21:09.935016] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fffe05e5af0 is same with the state(5) to be set 00:07:16.012 [2024-07-12 10:21:09.935059] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fffe05e5af0 is same with the state(5) to be set 00:07:16.012 passed 00:07:16.012 Test: test_nvme_tcp_qpair_connect_sock ...[2024-07-12 10:21:09.935113] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fffe05e5af0 is same with the state(5) to be set 00:07:16.012 [2024-07-12 10:21:09.935324] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2239:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 3 00:07:16.012 [2024-07-12 10:21:09.935412] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2251:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:07:16.012 [2024-07-12 10:21:09.935701] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2251:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr 
nvme_parse_addr() failed 00:07:16.012 passed 00:07:16.012 Test: test_nvme_tcp_qpair_icreq_send ...passed 00:07:16.012 Test: test_nvme_tcp_c2h_payload_handle ...[2024-07-12 10:21:09.935841] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1282:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7fffe05e5ca0): PDU Sequence Error 00:07:16.012 passed 00:07:16.012 Test: test_nvme_tcp_icresp_handle ...[2024-07-12 10:21:09.935986] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1508:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp PFV 0, got 1 00:07:16.012 [2024-07-12 10:21:09.936026] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1515:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp maxh2cdata >=4096, got 2048 00:07:16.012 [2024-07-12 10:21:09.936065] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fffe05e5640 is same with the state(5) to be set 00:07:16.012 [2024-07-12 10:21:09.936104] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1524:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp cpda <=31, got 64 00:07:16.012 [2024-07-12 10:21:09.936143] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fffe05e5640 is same with the state(5) to be set 00:07:16.012 passed 00:07:16.012 Test: test_nvme_tcp_pdu_payload_handle ...passed 00:07:16.012 Test: test_nvme_tcp_capsule_resp_hdr_handle ...[2024-07-12 10:21:09.936202] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fffe05e5640 is same with the state(0) to be set 00:07:16.012 [2024-07-12 10:21:09.936288] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1282:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7fffe05e6160): PDU Sequence Error 00:07:16.012 [2024-07-12 10:21:09.936380] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1585:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=1 for tqpair=0x7fffe05e4920 00:07:16.012 passed 00:07:16.012 Test: test_nvme_tcp_ctrlr_connect_qpair ...passed 00:07:16.012 Test: test_nvme_tcp_ctrlr_disconnect_qpair ...[2024-07-12 10:21:09.936584] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 353:nvme_tcp_ctrlr_disconnect_qpair: *ERROR*: tqpair=0x7fffe05e3fa0, errno=0, rc=0 00:07:16.012 [2024-07-12 10:21:09.936649] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fffe05e3fa0 is same with the state(5) to be set 00:07:16.012 [2024-07-12 10:21:09.936731] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fffe05e3fa0 is same with the state(5) to be set 00:07:16.012 [2024-07-12 10:21:09.936791] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7fffe05e3fa0 (0): Success 00:07:16.012 passed 00:07:16.012 Test: test_nvme_tcp_ctrlr_create_io_qpair ...[2024-07-12 10:21:09.936845] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7fffe05e3fa0 (0): Success 00:07:16.270 [2024-07-12 10:21:10.048281] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 
00:07:16.270 [2024-07-12 10:21:10.048383] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:07:16.270 passed 00:07:16.270 Test: test_nvme_tcp_ctrlr_delete_io_qpair ...passed 00:07:16.270 Test: test_nvme_tcp_poll_group_get_stats ...passed 00:07:16.270 Test: test_nvme_tcp_ctrlr_construct ...[2024-07-12 10:21:10.048589] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2849:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:07:16.270 [2024-07-12 10:21:10.048621] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2849:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:07:16.270 [2024-07-12 10:21:10.048808] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:07:16.270 [2024-07-12 10:21:10.048856] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:07:16.270 [2024-07-12 10:21:10.048949] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2239:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 254 00:07:16.270 [2024-07-12 10:21:10.049004] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:07:16.270 [2024-07-12 10:21:10.049140] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613000001540 with addr=192.168.1.78, port=23 00:07:16.270 passed 00:07:16.270 Test: test_nvme_tcp_qpair_submit_request ...[2024-07-12 10:21:10.049205] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:07:16.270 [2024-07-12 10:21:10.049337] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 783:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x613000001a80, and the iovcnt=1, remaining_size=1024 00:07:16.270 [2024-07-12 10:21:10.049374] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 961:nvme_tcp_qpair_submit_request: *ERROR*: nvme_tcp_req_init() failed 00:07:16.270 passed 00:07:16.270 00:07:16.270 Run Summary: Type Total Ran Passed Failed Inactive 00:07:16.270 suites 1 1 n/a 0 0 00:07:16.270 tests 27 27 27 0 0 00:07:16.270 asserts 624 624 624 0 n/a 00:07:16.270 00:07:16.270 Elapsed time = 0.116 seconds 00:07:16.270 10:21:10 -- unit/unittest.sh@98 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut 00:07:16.270 00:07:16.270 00:07:16.270 CUnit - A unit testing framework for C - Version 2.1-3 00:07:16.270 http://cunit.sourceforge.net/ 00:07:16.270 00:07:16.270 00:07:16.270 Suite: nvme_transport 00:07:16.270 Test: test_nvme_get_transport ...passed 00:07:16.270 Test: test_nvme_transport_poll_group_connect_qpair ...passed 00:07:16.270 Test: test_nvme_transport_poll_group_disconnect_qpair ...passed 00:07:16.270 Test: test_nvme_transport_poll_group_add_remove ...passed 00:07:16.270 Test: test_ctrlr_get_memory_domains ...passed 00:07:16.270 00:07:16.270 Run Summary: Type Total Ran Passed Failed Inactive 00:07:16.270 suites 1 1 n/a 0 0 00:07:16.270 tests 5 5 5 0 0 00:07:16.270 asserts 28 28 28 0 n/a 00:07:16.270 00:07:16.270 Elapsed time = 0.000 seconds 00:07:16.270 10:21:10 -- unit/unittest.sh@99 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut 00:07:16.270 00:07:16.270 00:07:16.270 CUnit - A unit testing framework for 
C - Version 2.1-3 00:07:16.270 http://cunit.sourceforge.net/ 00:07:16.270 00:07:16.270 00:07:16.270 Suite: nvme_io_msg 00:07:16.270 Test: test_nvme_io_msg_send ...passed 00:07:16.270 Test: test_nvme_io_msg_process ...passed 00:07:16.270 Test: test_nvme_io_msg_ctrlr_register_unregister ...passed 00:07:16.270 00:07:16.270 Run Summary: Type Total Ran Passed Failed Inactive 00:07:16.270 suites 1 1 n/a 0 0 00:07:16.270 tests 3 3 3 0 0 00:07:16.270 asserts 56 56 56 0 n/a 00:07:16.270 00:07:16.270 Elapsed time = 0.000 seconds 00:07:16.270 10:21:10 -- unit/unittest.sh@100 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut 00:07:16.270 00:07:16.270 00:07:16.270 CUnit - A unit testing framework for C - Version 2.1-3 00:07:16.270 http://cunit.sourceforge.net/ 00:07:16.270 00:07:16.270 00:07:16.270 Suite: nvme_pcie_common 00:07:16.270 Test: test_nvme_pcie_ctrlr_alloc_cmb ...[2024-07-12 10:21:10.149966] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 87:nvme_pcie_ctrlr_alloc_cmb: *ERROR*: Tried to allocate past valid CMB range! 00:07:16.270 passed 00:07:16.270 Test: test_nvme_pcie_qpair_construct_destroy ...passed 00:07:16.270 Test: test_nvme_pcie_ctrlr_cmd_create_delete_io_queue ...passed 00:07:16.270 Test: test_nvme_pcie_ctrlr_connect_qpair ...[2024-07-12 10:21:10.150695] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 503:nvme_completion_create_cq_cb: *ERROR*: nvme_create_io_cq failed! 00:07:16.270 passed 00:07:16.270 Test: test_nvme_pcie_ctrlr_construct_admin_qpair ...[2024-07-12 10:21:10.150797] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 456:nvme_completion_create_sq_cb: *ERROR*: nvme_create_io_sq failed, deleting cq! 00:07:16.270 [2024-07-12 10:21:10.150831] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 550:_nvme_pcie_ctrlr_create_io_qpair: *ERROR*: Failed to send request to create_io_cq 00:07:16.270 passed 00:07:16.270 Test: test_nvme_pcie_poll_group_get_stats ...[2024-07-12 10:21:10.151185] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1791:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:07:16.270 [2024-07-12 10:21:10.151224] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1791:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:07:16.270 passed 00:07:16.270 00:07:16.270 Run Summary: Type Total Ran Passed Failed Inactive 00:07:16.270 suites 1 1 n/a 0 0 00:07:16.270 tests 6 6 6 0 0 00:07:16.270 asserts 148 148 148 0 n/a 00:07:16.270 00:07:16.270 Elapsed time = 0.001 seconds 00:07:16.270 10:21:10 -- unit/unittest.sh@101 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut 00:07:16.270 00:07:16.270 00:07:16.270 CUnit - A unit testing framework for C - Version 2.1-3 00:07:16.270 http://cunit.sourceforge.net/ 00:07:16.270 00:07:16.270 00:07:16.270 Suite: nvme_fabric 00:07:16.270 Test: test_nvme_fabric_prop_set_cmd ...passed 00:07:16.270 Test: test_nvme_fabric_prop_get_cmd ...passed 00:07:16.270 Test: test_nvme_fabric_get_discovery_log_page ...passed 00:07:16.270 Test: test_nvme_fabric_discover_probe ...passed 00:07:16.270 Test: test_nvme_fabric_qpair_connect ...[2024-07-12 10:21:10.175855] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -125, trtype:(null) adrfam:(null) traddr: trsvcid: subnqn:nqn.2016-06.io.spdk:subsystem1 00:07:16.270 passed 00:07:16.270 00:07:16.271 Run Summary: Type Total Ran Passed Failed Inactive 00:07:16.271 suites 1 
1 n/a 0 0 00:07:16.271 tests 5 5 5 0 0 00:07:16.271 asserts 60 60 60 0 n/a 00:07:16.271 00:07:16.271 Elapsed time = 0.001 seconds 00:07:16.271 10:21:10 -- unit/unittest.sh@102 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut 00:07:16.271 00:07:16.271 00:07:16.271 CUnit - A unit testing framework for C - Version 2.1-3 00:07:16.271 http://cunit.sourceforge.net/ 00:07:16.271 00:07:16.271 00:07:16.271 Suite: nvme_opal 00:07:16.271 Test: test_opal_nvme_security_recv_send_done ...passed 00:07:16.271 Test: test_opal_add_short_atom_header ...passed 00:07:16.271 00:07:16.271 [2024-07-12 10:21:10.197045] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_opal.c: 171:opal_add_token_bytestring: *ERROR*: Error adding bytestring: end of buffer. 00:07:16.271 Run Summary: Type Total Ran Passed Failed Inactive 00:07:16.271 suites 1 1 n/a 0 0 00:07:16.271 tests 2 2 2 0 0 00:07:16.271 asserts 22 22 22 0 n/a 00:07:16.271 00:07:16.271 Elapsed time = 0.000 seconds 00:07:16.529 00:07:16.529 real 0m1.218s 00:07:16.529 user 0m0.714s 00:07:16.529 sys 0m0.345s 00:07:16.529 10:21:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.529 10:21:10 -- common/autotest_common.sh@10 -- # set +x 00:07:16.529 ************************************ 00:07:16.529 END TEST unittest_nvme 00:07:16.529 ************************************ 00:07:16.529 10:21:10 -- unit/unittest.sh@247 -- # run_test unittest_log /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:07:16.529 10:21:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:16.529 10:21:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:16.529 10:21:10 -- common/autotest_common.sh@10 -- # set +x 00:07:16.529 ************************************ 00:07:16.529 START TEST unittest_log 00:07:16.529 ************************************ 00:07:16.529 10:21:10 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:07:16.529 00:07:16.529 00:07:16.529 CUnit - A unit testing framework for C - Version 2.1-3 00:07:16.529 http://cunit.sourceforge.net/ 00:07:16.529 00:07:16.529 00:07:16.529 Suite: log 00:07:16.529 Test: log_test ...[2024-07-12 10:21:10.274490] log_ut.c: 54:log_test: *WARNING*: log warning unit test 00:07:16.529 [2024-07-12 10:21:10.274955] log_ut.c: 55:log_test: *DEBUG*: log test 00:07:16.529 log dump test: 00:07:16.529 00000000 6c 6f 67 20 64 75 6d 70 log dump 00:07:16.529 spdk dump test: 00:07:16.529 00000000 73 70 64 6b 20 64 75 6d 70 spdk dump 00:07:16.529 spdk dump test: 00:07:16.529 00000000 73 70 64 6b 20 64 75 6d 70 20 31 36 20 6d 6f 72 spdk dump 16 mor 00:07:16.529 00000010 65 20 63 68 61 72 73 e chars 00:07:16.529 passed 00:07:17.465 Test: deprecation ...passed 00:07:17.465 00:07:17.465 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.465 suites 1 1 n/a 0 0 00:07:17.465 tests 2 2 2 0 0 00:07:17.465 asserts 73 73 73 0 n/a 00:07:17.465 00:07:17.465 Elapsed time = 0.001 seconds 00:07:17.465 00:07:17.465 real 0m1.036s 00:07:17.465 user 0m0.019s 00:07:17.465 sys 0m0.016s 00:07:17.465 10:21:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.465 10:21:11 -- common/autotest_common.sh@10 -- # set +x 00:07:17.465 ************************************ 00:07:17.465 END TEST unittest_log 00:07:17.465 ************************************ 00:07:17.465 10:21:11 -- unit/unittest.sh@248 -- # run_test unittest_lvol /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:07:17.465 10:21:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 
']' 00:07:17.465 10:21:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:17.465 10:21:11 -- common/autotest_common.sh@10 -- # set +x 00:07:17.465 ************************************ 00:07:17.465 START TEST unittest_lvol 00:07:17.465 ************************************ 00:07:17.465 10:21:11 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:07:17.465 00:07:17.465 00:07:17.465 CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.465 http://cunit.sourceforge.net/ 00:07:17.465 00:07:17.465 00:07:17.465 Suite: lvol 00:07:17.465 Test: lvs_init_unload_success ...[2024-07-12 10:21:11.366424] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 892:spdk_lvs_unload: *ERROR*: Lvols still open on lvol store 00:07:17.465 passed 00:07:17.465 Test: lvs_init_destroy_success ...[2024-07-12 10:21:11.366938] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 962:spdk_lvs_destroy: *ERROR*: Lvols still open on lvol store 00:07:17.465 passed 00:07:17.465 Test: lvs_init_opts_success ...passed 00:07:17.465 Test: lvs_unload_lvs_is_null_fail ...[2024-07-12 10:21:11.367185] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 882:spdk_lvs_unload: *ERROR*: Lvol store is NULL 00:07:17.465 passed 00:07:17.465 Test: lvs_names ...[2024-07-12 10:21:11.367239] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 726:spdk_lvs_init: *ERROR*: No name specified. 00:07:17.465 [2024-07-12 10:21:11.367279] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 720:spdk_lvs_init: *ERROR*: Name has no null terminator. 00:07:17.465 [2024-07-12 10:21:11.367489] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 736:spdk_lvs_init: *ERROR*: lvolstore with name x already exists 00:07:17.465 passed 00:07:17.465 Test: lvol_create_destroy_success ...passed 00:07:17.465 Test: lvol_create_fail ...[2024-07-12 10:21:11.368070] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 689:spdk_lvs_init: *ERROR*: Blobstore device does not exist 00:07:17.465 [2024-07-12 10:21:11.368199] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1190:spdk_lvol_create: *ERROR*: lvol store does not exist 00:07:17.465 passed 00:07:17.465 Test: lvol_destroy_fail ...[2024-07-12 10:21:11.368513] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1026:lvol_delete_blob_cb: *ERROR*: Could not remove blob on lvol gracefully - forced removal 00:07:17.465 passed 00:07:17.465 Test: lvol_close ...[2024-07-12 10:21:11.368711] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1614:spdk_lvol_close: *ERROR*: lvol does not exist 00:07:17.465 [2024-07-12 10:21:11.368758] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 995:lvol_close_blob_cb: *ERROR*: Could not close blob on lvol 00:07:17.465 passed 00:07:17.465 Test: lvol_resize ...passed 00:07:17.465 Test: lvol_set_read_only ...passed 00:07:17.465 Test: test_lvs_load ...[2024-07-12 10:21:11.369643] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 631:lvs_opts_copy: *ERROR*: opts_size should not be zero value 00:07:17.465 passed 00:07:17.465 Test: lvols_load ...[2024-07-12 10:21:11.369692] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 441:lvs_load: *ERROR*: Invalid options 00:07:17.465 [2024-07-12 10:21:11.369938] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:07:17.466 [2024-07-12 10:21:11.370062] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:07:17.466 passed 00:07:17.466 Test: lvol_open ...passed 00:07:17.466 Test: lvol_snapshot ...passed 00:07:17.466 Test: lvol_snapshot_fail ...[2024-07-12 
10:21:11.370812] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name snap already exists 00:07:17.466 passed 00:07:17.466 Test: lvol_clone ...passed 00:07:17.466 Test: lvol_clone_fail ...[2024-07-12 10:21:11.371458] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone already exists 00:07:17.466 passed 00:07:17.466 Test: lvol_iter_clones ...passed 00:07:17.466 Test: lvol_refcnt ...[2024-07-12 10:21:11.372004] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1572:spdk_lvol_destroy: *ERROR*: Cannot destroy lvol 2d38024b-617c-4157-90eb-a81601a76481 because it is still open 00:07:17.466 passed 00:07:17.466 Test: lvol_names ...[2024-07-12 10:21:11.372210] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 00:07:17.466 [2024-07-12 10:21:11.372314] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:07:17.466 [2024-07-12 10:21:11.372548] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1169:lvs_verify_lvol_name: *ERROR*: lvol with name tmp_name is being already created 00:07:17.466 passed 00:07:17.466 Test: lvol_create_thin_provisioned ...passed 00:07:17.466 Test: lvol_rename ...[2024-07-12 10:21:11.373051] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:07:17.466 [2024-07-12 10:21:11.373191] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1524:spdk_lvol_rename: *ERROR*: Lvol lvol_new already exists in lvol store lvs 00:07:17.466 passed 00:07:17.466 Test: lvs_rename ...[2024-07-12 10:21:11.373453] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 769:lvs_rename_cb: *ERROR*: Lvol store rename operation failed 00:07:17.466 passed 00:07:17.466 Test: lvol_inflate ...[2024-07-12 10:21:11.373685] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:07:17.466 passed 00:07:17.466 Test: lvol_decouple_parent ...[2024-07-12 10:21:11.373959] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:07:17.466 passed 00:07:17.466 Test: lvol_get_xattr ...passed 00:07:17.466 Test: lvol_esnap_reload ...passed 00:07:17.466 Test: lvol_esnap_create_bad_args ...[2024-07-12 10:21:11.374462] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1245:spdk_lvol_create_esnap_clone: *ERROR*: lvol store does not exist 00:07:17.466 [2024-07-12 10:21:11.374507] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 
00:07:17.466 [2024-07-12 10:21:11.374548] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1258:spdk_lvol_create_esnap_clone: *ERROR*: Cannot create 'lvs/clone1': size 4198400 is not an integer multiple of cluster size 1048576 00:07:17.466 [2024-07-12 10:21:11.374652] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:07:17.466 [2024-07-12 10:21:11.374781] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone1 already exists 00:07:17.466 passed 00:07:17.466 Test: lvol_esnap_create_delete ...passed 00:07:17.466 Test: lvol_esnap_load_esnaps ...[2024-07-12 10:21:11.375156] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1832:lvs_esnap_bs_dev_create: *ERROR*: Blob 0x2a: no lvs context nor lvol context 00:07:17.466 passed 00:07:17.466 Test: lvol_esnap_missing ...[2024-07-12 10:21:11.375297] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:07:17.466 [2024-07-12 10:21:11.375386] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:07:17.466 passed 00:07:17.466 Test: lvol_esnap_hotplug ... 00:07:17.466 lvol_esnap_hotplug scenario 0: PASS - one missing, happy path 00:07:17.466 lvol_esnap_hotplug scenario 1: PASS - one missing, cb registers degraded_set 00:07:17.466 [2024-07-12 10:21:11.376135] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 3d2dfa64-d205-49a3-bed2-962b964e9cfb: failed to create esnap bs_dev: error -12 00:07:17.466 lvol_esnap_hotplug scenario 2: PASS - one missing, cb retuns -ENOMEM 00:07:17.466 lvol_esnap_hotplug scenario 3: PASS - two missing with same esnap, happy path 00:07:17.466 [2024-07-12 10:21:11.376370] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 1fd39cd0-fd5f-4628-b2af-a0cbb2d415f9: failed to create esnap bs_dev: error -12 00:07:17.466 lvol_esnap_hotplug scenario 4: PASS - two missing with same esnap, first -ENOMEM 00:07:17.466 lvol_esnap_hotplug scenario 5: PASS - two missing with same esnap, second -ENOMEM 00:07:17.466 [2024-07-12 10:21:11.376528] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 9fb55a5c-f735-41a4-9143-b1c0e609f7da: failed to create esnap bs_dev: error -12 00:07:17.466 lvol_esnap_hotplug scenario 6: PASS - two missing with different esnaps, happy path 00:07:17.466 lvol_esnap_hotplug scenario 7: PASS - two missing with different esnaps, first still missing 00:07:17.466 lvol_esnap_hotplug scenario 8: PASS - three missing with same esnap, happy path 00:07:17.466 lvol_esnap_hotplug scenario 9: PASS - three missing with same esnap, first still missing 00:07:17.466 lvol_esnap_hotplug scenario 10: PASS - three missing with same esnap, first two still missing 00:07:17.466 lvol_esnap_hotplug scenario 11: PASS - three missing with same esnap, middle still missing 00:07:17.466 lvol_esnap_hotplug scenario 12: PASS - three missing with same esnap, last still missing 00:07:17.466 passed 00:07:17.466 Test: lvol_get_by ...passed 00:07:17.466 00:07:17.466 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.466 suites 1 1 n/a 0 0 00:07:17.466 tests 34 34 34 0 0 00:07:17.466 asserts 1439 1439 1439 0 n/a 00:07:17.466 00:07:17.466 Elapsed time = 0.012 seconds 00:07:17.466 00:07:17.466 real 0m0.040s 00:07:17.466 user 0m0.021s 00:07:17.466 sys 0m0.019s 00:07:17.466 10:21:11 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.466 ************************************ 00:07:17.466 END TEST unittest_lvol 00:07:17.466 ************************************ 00:07:17.466 10:21:11 -- common/autotest_common.sh@10 -- # set +x 00:07:17.725 10:21:11 -- unit/unittest.sh@249 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:17.725 10:21:11 -- unit/unittest.sh@250 -- # run_test unittest_nvme_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:07:17.725 10:21:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:17.725 10:21:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:17.725 10:21:11 -- common/autotest_common.sh@10 -- # set +x 00:07:17.725 ************************************ 00:07:17.725 START TEST unittest_nvme_rdma 00:07:17.725 ************************************ 00:07:17.725 10:21:11 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:07:17.725 00:07:17.725 00:07:17.725 CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.725 http://cunit.sourceforge.net/ 00:07:17.725 00:07:17.725 00:07:17.725 Suite: nvme_rdma 00:07:17.725 Test: test_nvme_rdma_build_sgl_request ...[2024-07-12 10:21:11.455599] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1455:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -34 00:07:17.725 [2024-07-12 10:21:11.455955] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1628:nvme_rdma_build_sgl_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:07:17.725 [2024-07-12 10:21:11.456051] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1684:nvme_rdma_build_sgl_request: *ERROR*: Size of SGL descriptors (64) exceeds ICD (60) 00:07:17.725 passed 00:07:17.725 Test: test_nvme_rdma_build_sgl_inline_request ...passed 00:07:17.725 Test: test_nvme_rdma_build_contig_request ...passed 00:07:17.725 Test: test_nvme_rdma_build_contig_inline_request ...passed 00:07:17.725 Test: test_nvme_rdma_create_reqs ...[2024-07-12 10:21:11.456122] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1565:nvme_rdma_build_contig_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:07:17.725 [2024-07-12 10:21:11.456264] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1007:nvme_rdma_create_reqs: *ERROR*: Failed to allocate rdma_reqs 00:07:17.725 passed 00:07:17.725 Test: test_nvme_rdma_create_rsps ...passed 00:07:17.725 Test: test_nvme_rdma_ctrlr_create_qpair ...[2024-07-12 10:21:11.456623] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 925:nvme_rdma_create_rsps: *ERROR*: Failed to allocate rsp_sgls 00:07:17.725 [2024-07-12 10:21:11.456818] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1822:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:07:17.725 passed 00:07:17.725 Test: test_nvme_rdma_poller_create ...[2024-07-12 10:21:11.456877] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1822:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 
00:07:17.725 passed 00:07:17.725 Test: test_nvme_rdma_qpair_process_cm_event ...passed 00:07:17.725 Test: test_nvme_rdma_ctrlr_construct ...[2024-07-12 10:21:11.457036] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 526:nvme_rdma_qpair_process_cm_event: *ERROR*: Unexpected Acceptor Event [255] 00:07:17.725 passed 00:07:17.725 Test: test_nvme_rdma_req_put_and_get ...passed 00:07:17.725 Test: test_nvme_rdma_req_init ...passed 00:07:17.725 Test: test_nvme_rdma_validate_cm_event ...passed 00:07:17.725 Test: test_nvme_rdma_qpair_init ...passed 00:07:17.725 Test: test_nvme_rdma_qpair_submit_request ...passed 00:07:17.725 Test: test_nvme_rdma_memory_domain ...[2024-07-12 10:21:11.457418] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_CONNECT_RESPONSE (5) from CM event channel (status = 0) 00:07:17.725 [2024-07-12 10:21:11.457464] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 10) 00:07:17.725 [2024-07-12 10:21:11.457653] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 352:nvme_rdma_get_memory_domain: *ERROR*: Failed to create memory domain 00:07:17.725 passed 00:07:17.725 Test: test_rdma_ctrlr_get_memory_domains ...passed 00:07:17.725 Test: test_rdma_get_memory_translation ...[2024-07-12 10:21:11.457764] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1444:nvme_rdma_get_memory_translation: *ERROR*: DMA memory translation failed, rc -1, iov count 0 00:07:17.725 [2024-07-12 10:21:11.457827] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1455:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -1 00:07:17.725 passed 00:07:17.725 Test: test_get_rdma_qpair_from_wc ...passed 00:07:17.725 Test: test_nvme_rdma_ctrlr_get_max_sges ...passed 00:07:17.725 Test: test_nvme_rdma_poll_group_get_stats ...[2024-07-12 10:21:11.457938] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3239:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:07:17.725 [2024-07-12 10:21:11.457985] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3239:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:07:17.725 passed 00:07:17.725 Test: test_nvme_rdma_qpair_set_poller ...[2024-07-12 10:21:11.458083] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2972:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 00:07:17.725 [2024-07-12 10:21:11.458123] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3018:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0xfeedbeef 00:07:17.725 [2024-07-12 10:21:11.458151] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 723:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7ffd57658710 on poll group 0x60b0000001a0 00:07:17.725 [2024-07-12 10:21:11.458214] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2972:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 
00:07:17.725 passed 00:07:17.725 00:07:17.725 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.725 suites 1 1 n/a 0 0 00:07:17.725 tests 22 22 22 0 0 00:07:17.725 asserts 412 412 412 0 n/a 00:07:17.725 00:07:17.725 Elapsed time = 0.003 seconds 00:07:17.725 [2024-07-12 10:21:11.458260] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3018:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device (nil) 00:07:17.725 [2024-07-12 10:21:11.458285] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 723:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7ffd57658710 on poll group 0x60b0000001a0 00:07:17.725 [2024-07-12 10:21:11.458364] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 701:nvme_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:07:17.725 00:07:17.725 real 0m0.033s 00:07:17.725 user 0m0.020s 00:07:17.725 sys 0m0.014s 00:07:17.725 10:21:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.725 10:21:11 -- common/autotest_common.sh@10 -- # set +x 00:07:17.725 ************************************ 00:07:17.725 END TEST unittest_nvme_rdma 00:07:17.725 ************************************ 00:07:17.725 10:21:11 -- unit/unittest.sh@251 -- # run_test unittest_nvmf_transport /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:07:17.725 10:21:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:17.725 10:21:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:17.725 10:21:11 -- common/autotest_common.sh@10 -- # set +x 00:07:17.725 ************************************ 00:07:17.726 START TEST unittest_nvmf_transport 00:07:17.726 ************************************ 00:07:17.726 10:21:11 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:07:17.726 00:07:17.726 00:07:17.726 CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.726 http://cunit.sourceforge.net/ 00:07:17.726 00:07:17.726 00:07:17.726 Suite: nvmf 00:07:17.726 Test: test_spdk_nvmf_transport_create ...[2024-07-12 10:21:11.541733] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 247:nvmf_transport_create: *ERROR*: Transport type 'new_ops' unavailable. 00:07:17.726 [2024-07-12 10:21:11.542023] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 267:nvmf_transport_create: *ERROR*: io_unit_size cannot be 0 00:07:17.726 [2024-07-12 10:21:11.542078] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 271:nvmf_transport_create: *ERROR*: io_unit_size 131072 is larger than iobuf pool large buffer size 65536 00:07:17.726 [2024-07-12 10:21:11.542179] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 254:nvmf_transport_create: *ERROR*: max_io_size 4096 must be a power of 2 and be greater than or equal 8KB 00:07:17.726 passed 00:07:17.726 Test: test_nvmf_transport_poll_group_create ...passed 00:07:17.726 Test: test_spdk_nvmf_transport_opts_init ...passed 00:07:17.726 Test: test_spdk_nvmf_transport_listen_ext ...[2024-07-12 10:21:11.542396] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 788:spdk_nvmf_transport_opts_init: *ERROR*: Transport type invalid_ops unavailable. 
00:07:17.726 [2024-07-12 10:21:11.542472] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 793:spdk_nvmf_transport_opts_init: *ERROR*: opts should not be NULL 00:07:17.726 [2024-07-12 10:21:11.542503] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 798:spdk_nvmf_transport_opts_init: *ERROR*: opts_size inside opts should not be zero value 00:07:17.726 passed 00:07:17.726 00:07:17.726 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.726 suites 1 1 n/a 0 0 00:07:17.726 tests 4 4 4 0 0 00:07:17.726 asserts 49 49 49 0 n/a 00:07:17.726 00:07:17.726 Elapsed time = 0.001 seconds 00:07:17.726 00:07:17.726 real 0m0.039s 00:07:17.726 user 0m0.022s 00:07:17.726 sys 0m0.018s 00:07:17.726 10:21:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.726 ************************************ 00:07:17.726 END TEST unittest_nvmf_transport 00:07:17.726 ************************************ 00:07:17.726 10:21:11 -- common/autotest_common.sh@10 -- # set +x 00:07:17.726 10:21:11 -- unit/unittest.sh@252 -- # run_test unittest_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:07:17.726 10:21:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:17.726 10:21:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:17.726 10:21:11 -- common/autotest_common.sh@10 -- # set +x 00:07:17.726 ************************************ 00:07:17.726 START TEST unittest_rdma 00:07:17.726 ************************************ 00:07:17.726 10:21:11 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:07:17.726 00:07:17.726 00:07:17.726 CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.726 http://cunit.sourceforge.net/ 00:07:17.726 00:07:17.726 00:07:17.726 Suite: rdma_common 00:07:17.726 Test: test_spdk_rdma_pd ...[2024-07-12 10:21:11.623774] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:07:17.726 [2024-07-12 10:21:11.624066] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:07:17.726 passed 00:07:17.726 00:07:17.726 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.726 suites 1 1 n/a 0 0 00:07:17.726 tests 1 1 1 0 0 00:07:17.726 asserts 31 31 31 0 n/a 00:07:17.726 00:07:17.726 Elapsed time = 0.000 seconds 00:07:17.726 00:07:17.726 real 0m0.022s 00:07:17.726 user 0m0.005s 00:07:17.726 sys 0m0.018s 00:07:17.726 10:21:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.726 10:21:11 -- common/autotest_common.sh@10 -- # set +x 00:07:17.726 ************************************ 00:07:17.726 END TEST unittest_rdma 00:07:17.726 ************************************ 00:07:17.985 10:21:11 -- unit/unittest.sh@255 -- # grep -q '#define SPDK_CONFIG_NVME_CUSE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:17.985 10:21:11 -- unit/unittest.sh@256 -- # run_test unittest_nvme_cuse /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:07:17.985 10:21:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:17.985 10:21:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:17.985 10:21:11 -- common/autotest_common.sh@10 -- # set +x 00:07:17.985 ************************************ 00:07:17.985 START TEST unittest_nvme_cuse 00:07:17.985 ************************************ 00:07:17.985 10:21:11 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:07:17.985 00:07:17.985 00:07:17.985 
CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.985 http://cunit.sourceforge.net/ 00:07:17.985 00:07:17.985 00:07:17.985 Suite: nvme_cuse 00:07:17.985 Test: test_cuse_nvme_submit_io_read_write ...passed 00:07:17.985 Test: test_cuse_nvme_submit_io_read_write_with_md ...passed 00:07:17.985 Test: test_cuse_nvme_submit_passthru_cmd ...passed 00:07:17.985 Test: test_cuse_nvme_submit_passthru_cmd_with_md ...passed 00:07:17.985 Test: test_nvme_cuse_get_cuse_ns_device ...passed 00:07:17.985 Test: test_cuse_nvme_submit_io ...[2024-07-12 10:21:11.701387] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 656:cuse_nvme_submit_io: *ERROR*: SUBMIT_IO: opc:0 not valid 00:07:17.985 passed 00:07:17.985 Test: test_cuse_nvme_reset ...[2024-07-12 10:21:11.701657] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 341:cuse_nvme_reset: *ERROR*: Namespace reset not supported 00:07:17.985 passed 00:07:17.985 Test: test_nvme_cuse_stop ...passed 00:07:17.985 Test: test_spdk_nvme_cuse_get_ctrlr_name ...passed 00:07:17.985 00:07:17.985 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.985 suites 1 1 n/a 0 0 00:07:17.985 tests 9 9 9 0 0 00:07:17.985 asserts 121 121 121 0 n/a 00:07:17.985 00:07:17.985 Elapsed time = 0.001 seconds 00:07:17.985 00:07:17.985 real 0m0.032s 00:07:17.985 user 0m0.018s 00:07:17.985 sys 0m0.014s 00:07:17.985 10:21:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.985 ************************************ 00:07:17.985 END TEST unittest_nvme_cuse 00:07:17.985 ************************************ 00:07:17.985 10:21:11 -- common/autotest_common.sh@10 -- # set +x 00:07:17.985 10:21:11 -- unit/unittest.sh@259 -- # run_test unittest_nvmf unittest_nvmf 00:07:17.985 10:21:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:17.985 10:21:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:17.985 10:21:11 -- common/autotest_common.sh@10 -- # set +x 00:07:17.985 ************************************ 00:07:17.985 START TEST unittest_nvmf 00:07:17.985 ************************************ 00:07:17.985 10:21:11 -- common/autotest_common.sh@1104 -- # unittest_nvmf 00:07:17.985 10:21:11 -- unit/unittest.sh@106 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr.c/ctrlr_ut 00:07:17.985 00:07:17.985 00:07:17.985 CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.985 http://cunit.sourceforge.net/ 00:07:17.985 00:07:17.985 00:07:17.985 Suite: nvmf 00:07:17.985 Test: test_get_log_page ...[2024-07-12 10:21:11.784321] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2 00:07:17.985 passed 00:07:17.985 Test: test_process_fabrics_cmd ...passed 00:07:17.985 Test: test_connect ...[2024-07-12 10:21:11.785513] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 905:nvmf_ctrlr_cmd_connect: *ERROR*: Connect command data length 0x3ff too small 00:07:17.985 [2024-07-12 10:21:11.785745] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 768:_nvmf_ctrlr_connect: *ERROR*: Connect command unsupported RECFMT 1234 00:07:17.985 [2024-07-12 10:21:11.785901] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 944:nvmf_ctrlr_cmd_connect: *ERROR*: Connect HOSTNQN is not null terminated 00:07:17.985 [2024-07-12 10:21:11.786030] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:subsystem1' does not allow host 'nqn.2016-06.io.spdk:host1' 00:07:17.985 [2024-07-12 10:21:11.786271] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 
779:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE = 0 00:07:17.985 [2024-07-12 10:21:11.786416] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 786:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE for admin queue 32 (min 1, max 31) 00:07:17.985 [2024-07-12 10:21:11.786622] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 792:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE 64 (min 1, max 63) 00:07:17.985 [2024-07-12 10:21:11.786777] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 819:_nvmf_ctrlr_connect: *ERROR*: The NVMf target only supports dynamic mode (CNTLID = 0x1234). 00:07:17.985 [2024-07-12 10:21:11.786997] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0xffff 00:07:17.985 [2024-07-12 10:21:11.787101] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 587:nvmf_ctrlr_add_io_qpair: *ERROR*: I/O connect not allowed on discovery controller 00:07:17.985 [2024-07-12 10:21:11.787590] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 593:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect before ctrlr was enabled 00:07:17.985 [2024-07-12 10:21:11.787799] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 599:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOSQES 3 00:07:17.985 [2024-07-12 10:21:11.787996] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 606:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOCQES 3 00:07:17.985 [2024-07-12 10:21:11.788096] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 623:nvmf_ctrlr_add_io_qpair: *ERROR*: Requested QID 3 but Max QID is 2 00:07:17.985 [2024-07-12 10:21:11.788323] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 232:ctrlr_add_qpair_and_send_rsp: *ERROR*: Got I/O connect with duplicate QID 1 00:07:17.985 [2024-07-12 10:21:11.788595] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 699:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 2, group (nil)) 00:07:17.985 passed 00:07:17.985 Test: test_get_ns_id_desc_list ...passed 00:07:17.985 Test: test_identify_ns ...[2024-07-12 10:21:11.789045] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:17.985 [2024-07-12 10:21:11.789374] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4 00:07:17.985 [2024-07-12 10:21:11.789652] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:07:17.985 passed 00:07:17.985 Test: test_identify_ns_iocs_specific ...[2024-07-12 10:21:11.790081] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:17.985 [2024-07-12 10:21:11.790402] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:17.985 passed 00:07:17.985 Test: test_reservation_write_exclusive ...passed 00:07:17.985 Test: test_reservation_exclusive_access ...passed 00:07:17.985 Test: test_reservation_write_exclusive_regs_only_and_all_regs ...passed 00:07:17.985 Test: test_reservation_exclusive_access_regs_only_and_all_regs ...passed 00:07:17.985 Test: test_reservation_notification_log_page ...passed 00:07:17.985 Test: test_get_dif_ctx ...passed 00:07:17.985 Test: test_set_get_features ...[2024-07-12 10:21:11.792225] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1534:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:07:17.985 [2024-07-12 
10:21:11.792375] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1534:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:07:17.985 [2024-07-12 10:21:11.792456] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1545:temp_threshold_opts_valid: *ERROR*: Invalid THSEL 3 00:07:17.985 [2024-07-12 10:21:11.792586] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1621:nvmf_ctrlr_set_features_error_recovery: *ERROR*: Host set unsupported DULBE bit 00:07:17.985 passed 00:07:17.985 Test: test_identify_ctrlr ...passed 00:07:17.985 Test: test_identify_ctrlr_iocs_specific ...passed 00:07:17.985 Test: test_custom_admin_cmd ...passed 00:07:17.985 Test: test_fused_compare_and_write ...[2024-07-12 10:21:11.793581] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4105:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong sequence of fused operations 00:07:17.985 [2024-07-12 10:21:11.793741] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4094:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:07:17.985 [2024-07-12 10:21:11.793911] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4112:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:07:17.985 passed 00:07:17.985 Test: test_multi_async_event_reqs ...passed 00:07:17.986 Test: test_get_ana_log_page_one_ns_per_anagrp ...passed 00:07:17.986 Test: test_get_ana_log_page_multi_ns_per_anagrp ...passed 00:07:17.986 Test: test_multi_async_events ...passed 00:07:17.986 Test: test_rae ...passed 00:07:17.986 Test: test_nvmf_ctrlr_create_destruct ...passed 00:07:17.986 Test: test_nvmf_ctrlr_use_zcopy ...passed 00:07:17.986 Test: test_spdk_nvmf_request_zcopy_start ...[2024-07-12 10:21:11.795647] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4232:nvmf_ctrlr_process_io_cmd: *ERROR*: I/O command sent before CONNECT 00:07:17.986 passed 00:07:17.986 Test: test_zcopy_read ...passed 00:07:17.986 Test: test_zcopy_write ...passed 00:07:17.986 Test: test_nvmf_property_set ...passed 00:07:17.986 Test: test_nvmf_ctrlr_get_features_host_behavior_support ...[2024-07-12 10:21:11.796526] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1832:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:07:17.986 passed[2024-07-12 10:21:11.796652] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1832:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:07:17.986 00:07:17.986 Test: test_nvmf_ctrlr_set_features_host_behavior_support ...[2024-07-12 10:21:11.796863] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1855:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iovcnt: 0 00:07:17.986 [2024-07-12 10:21:11.797010] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1861:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iov_len: 0 00:07:17.986 [2024-07-12 10:21:11.797166] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1873:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02 00:07:17.986 passed 00:07:17.986 00:07:17.986 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.986 suites 1 1 n/a 0 0 00:07:17.986 tests 30 30 30 0 0 00:07:17.986 asserts 885 885 885 0 n/a 00:07:17.986 00:07:17.986 Elapsed time = 0.007 seconds 00:07:17.986 10:21:11 -- unit/unittest.sh@107 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut 00:07:17.986 00:07:17.986 00:07:17.986 CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.986 
http://cunit.sourceforge.net/ 00:07:17.986 00:07:17.986 00:07:17.986 Suite: nvmf 00:07:17.986 Test: test_get_rw_params ...passed 00:07:17.986 Test: test_lba_in_range ...passed 00:07:17.986 Test: test_get_dif_ctx ...passed 00:07:17.986 Test: test_nvmf_bdev_ctrlr_identify_ns ...passed 00:07:17.986 Test: test_spdk_nvmf_bdev_ctrlr_compare_and_write_cmd ...[2024-07-12 10:21:11.829140] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 435:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Fused command start lba / num blocks mismatch 00:07:17.986 passed 00:07:17.986 Test: test_nvmf_bdev_ctrlr_zcopy_start ...[2024-07-12 10:21:11.829421] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 443:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: end of media 00:07:17.986 [2024-07-12 10:21:11.829507] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 450:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Write NLB 2 * block size 512 > SGL length 1023 00:07:17.986 passed 00:07:17.986 Test: test_nvmf_bdev_ctrlr_cmd ...[2024-07-12 10:21:11.829563] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 946:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: end of media 00:07:17.986 [2024-07-12 10:21:11.829636] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 953:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: Read NLB 2 * block size 512 > SGL length 1023 00:07:17.986 [2024-07-12 10:21:11.829731] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 389:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: end of media 00:07:17.986 [2024-07-12 10:21:11.829757] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 396:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: Compare NLB 3 * block size 512 > SGL length 512 00:07:17.986 [2024-07-12 10:21:11.829813] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 488:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: invalid write zeroes size, should not exceed 1Kib 00:07:17.986 passed 00:07:17.986 Test: test_nvmf_bdev_ctrlr_read_write_cmd ...passed 00:07:17.986 Test: test_nvmf_bdev_ctrlr_nvme_passthru ...passed 00:07:17.986 00:07:17.986 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.986 suites 1 1 n/a 0 0 00:07:17.986 tests 9 9 9 0 0 00:07:17.986 asserts 157 157 157 0 n/a 00:07:17.986 00:07:17.986 Elapsed time = 0.001 seconds 00:07:17.986 [2024-07-12 10:21:11.829846] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 495:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: end of media 00:07:17.986 10:21:11 -- unit/unittest.sh@108 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut 00:07:17.986 00:07:17.986 00:07:17.986 CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.986 http://cunit.sourceforge.net/ 00:07:17.986 00:07:17.986 00:07:17.986 Suite: nvmf 00:07:17.986 Test: test_discovery_log ...passed 00:07:17.986 Test: test_discovery_log_with_filters ...passed 00:07:17.986 00:07:17.986 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.986 suites 1 1 n/a 0 0 00:07:17.986 tests 2 2 2 0 0 00:07:17.986 asserts 238 238 238 0 n/a 00:07:17.986 00:07:17.986 Elapsed time = 0.003 seconds 00:07:17.986 10:21:11 -- unit/unittest.sh@109 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/subsystem.c/subsystem_ut 00:07:17.986 00:07:17.986 00:07:17.986 CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.986 http://cunit.sourceforge.net/ 00:07:17.986 00:07:17.986 00:07:17.986 Suite: nvmf 00:07:17.986 Test: nvmf_test_create_subsystem ...[2024-07-12 10:21:11.901958] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 125:nvmf_nqn_is_valid: *ERROR*: Invalid NQN 
"nqn.2016-06.io.spdk:". NQN must contain user specified name with a ':' as a prefix. 00:07:17.986 [2024-07-12 10:21:11.902246] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 134:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub". At least one Label is too long. 00:07:17.986 [2024-07-12 10:21:11.902320] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.3spdk:sub". Label names must start with a letter. 00:07:17.986 [2024-07-12 10:21:11.902352] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.-spdk:subsystem1". Label names must start with a letter. 00:07:17.986 [2024-07-12 10:21:11.902374] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 183:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk-:subsystem1". Label names must end with an alphanumeric symbol. 00:07:17.986 [2024-07-12 10:21:11.902403] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io..spdk:subsystem1". Label names must start with a letter. 00:07:17.986 [2024-07-12 10:21:11.902523] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 79:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": length 224 > max 223 00:07:17.986 passed 00:07:17.986 Test: test_spdk_nvmf_subsystem_add_ns ...[2024-07-12 10:21:11.902676] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 207:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk:�subsystem1". Label names must contain only valid utf-8. 
00:07:17.986 [2024-07-12 10:21:11.902765] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa": uuid is not the correct length 00:07:17.986 [2024-07-12 10:21:11.902796] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:07:17.986 [2024-07-12 10:21:11.902816] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:07:17.986 [2024-07-12 10:21:11.902957] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 5 already in use 00:07:17.986 passed 00:07:17.986 Test: test_spdk_nvmf_subsystem_set_sn ...passed 00:07:17.986 Test: test_reservation_register ...[2024-07-12 10:21:11.903051] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1774:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Invalid NSID 4294967295 00:07:17.986 [2024-07-12 10:21:11.903286] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:17.986 passed 00:07:17.986 Test: test_reservation_register_with_ptpl ...[2024-07-12 10:21:11.903412] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2881:nvmf_ns_reservation_register: *ERROR*: No registrant 00:07:17.986 passed 00:07:17.986 Test: test_reservation_acquire_preempt_1 ...[2024-07-12 10:21:11.904403] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:17.986 passed 00:07:17.986 Test: test_reservation_acquire_release_with_ptpl ...passed 00:07:17.986 Test: test_reservation_release ...[2024-07-12 10:21:11.906063] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:17.986 passed 00:07:17.986 Test: test_reservation_unregister_notification ...[2024-07-12 10:21:11.906306] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:17.986 passed 00:07:17.986 Test: test_reservation_release_notification ...[2024-07-12 10:21:11.906587] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:17.986 passed 00:07:17.986 Test: test_reservation_release_notification_write_exclusive ...[2024-07-12 10:21:11.906842] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:17.986 passed 00:07:17.986 Test: test_reservation_clear_notification ...[2024-07-12 10:21:11.907084] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:17.986 passed 00:07:17.986 Test: test_reservation_preempt_notification ...[2024-07-12 10:21:11.907298] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:17.986 passed 00:07:17.986 Test: test_spdk_nvmf_ns_event ...passed 00:07:17.986 Test: test_nvmf_ns_reservation_add_remove_registrant ...passed 
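The reservation tests above repeatedly hit "The same host already register a key with 0xa1" and "No registrant": a host may hold at most one registration key, and the release/preempt paths need an existing registrant to act on. A toy sketch of that bookkeeping with hypothetical structures (SPDK's real code tracks registrants per namespace and also persists them for the ptpl cases):

```c
/* Toy registrant table; hypothetical types, same one-key-per-host invariant. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

struct registrant {
	char	 hostid[16];
	uint64_t rkey;
};

static struct registrant *find_registrant(struct registrant *regs, size_t n,
					   const char *hostid)
{
	for (size_t i = 0; i < n; i++) {
		if (memcmp(regs[i].hostid, hostid, sizeof(regs[i].hostid)) == 0) {
			return &regs[i];
		}
	}
	return NULL;	/* unregister/acquire on this path hits "No registrant" */
}

static bool may_register(struct registrant *regs, size_t n,
			 const char *hostid, uint64_t key)
{
	struct registrant *r = find_registrant(regs, n, hostid);

	/* "The same host already register a key with 0xa1" */
	return r == NULL || r->rkey == key;
}
```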
00:07:17.986 Test: test_nvmf_subsystem_add_ctrlr ...passed 00:07:17.986 Test: test_spdk_nvmf_subsystem_add_host ...[2024-07-12 10:21:11.908043] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 260:nvmf_transport_create: *ERROR*: max_aq_depth 0 is less than minimum defined by NVMf spec, use min value 00:07:17.986 [2024-07-12 10:21:11.908137] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to transport_ut transport 00:07:17.986 passed 00:07:17.986 Test: test_nvmf_ns_reservation_report ...passed 00:07:17.986 Test: test_nvmf_nqn_is_valid ...passed 00:07:17.986 Test: test_nvmf_ns_reservation_restore ...[2024-07-12 10:21:11.908279] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3186:nvmf_ns_reservation_report: *ERROR*: NVMeoF uses extended controller data structure, please set EDS bit in cdw11 and try again 00:07:17.986 [2024-07-12 10:21:11.908360] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.": length 4 < min 11 00:07:17.986 [2024-07-12 10:21:11.908393] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:140ab15a-320d-40c3-93e0-94fe48bb89b": uuid is not the correct length 00:07:17.986 [2024-07-12 10:21:11.908438] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io...spdk:cnode1". Label names must start with a letter. 00:07:17.986 [2024-07-12 10:21:11.908557] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2380:nvmf_ns_reservation_restore: *ERROR*: Existing bdev UUID is not same with configuration file 00:07:17.986 passed 00:07:17.986 Test: test_nvmf_subsystem_state_change ...passed 00:07:17.986 Test: test_nvmf_reservation_custom_ops ...passed 00:07:17.987 00:07:17.987 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.987 suites 1 1 n/a 0 0 00:07:17.987 tests 22 22 22 0 0 00:07:17.987 asserts 407 407 407 0 n/a 00:07:17.987 00:07:17.987 Elapsed time = 0.008 seconds 00:07:18.250 10:21:11 -- unit/unittest.sh@110 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/tcp.c/tcp_ut 00:07:18.250 00:07:18.250 00:07:18.250 CUnit - A unit testing framework for C - Version 2.1-3 00:07:18.250 http://cunit.sourceforge.net/ 00:07:18.250 00:07:18.250 00:07:18.250 Suite: nvmf 00:07:18.250 Test: test_nvmf_tcp_create ...[2024-07-12 10:21:11.963033] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c: 730:nvmf_tcp_create: *ERROR*: Unsupported IO Unit size specified, 16 bytes 00:07:18.250 passed 00:07:18.250 Test: test_nvmf_tcp_destroy ...passed 00:07:18.250 Test: test_nvmf_tcp_poll_group_create ...passed 00:07:18.250 Test: test_nvmf_tcp_send_c2h_data ...passed 00:07:18.250 Test: test_nvmf_tcp_h2c_data_hdr_handle ...passed 00:07:18.250 Test: test_nvmf_tcp_in_capsule_data_handle ...passed 00:07:18.250 Test: test_nvmf_tcp_qpair_init_mem_resource ...passed 00:07:18.250 Test: test_nvmf_tcp_send_c2h_term_req ...[2024-07-12 10:21:12.059703] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:18.250 [2024-07-12 10:21:12.059800] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe5c3123d0 is same with the state(5) to be set 00:07:18.250 passed 00:07:18.250 Test: test_nvmf_tcp_send_capsule_resp_pdu ...passed 00:07:18.250 Test: test_nvmf_tcp_icreq_handle ...[2024-07-12 10:21:12.059906] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe5c3123d0 is same with the state(5) to be set 00:07:18.250 [2024-07-12 10:21:12.059961] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:18.250 [2024-07-12 10:21:12.060001] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe5c3123d0 is same with the state(5) to be set 00:07:18.250 [2024-07-12 10:21:12.060096] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2089:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:07:18.250 [2024-07-12 10:21:12.060192] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:18.250 [2024-07-12 10:21:12.060244] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe5c3123d0 is same with the state(5) to be set 00:07:18.250 [2024-07-12 10:21:12.060271] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2089:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:07:18.250 [2024-07-12 10:21:12.060301] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe5c3123d0 is same with the state(5) to be set 00:07:18.250 [2024-07-12 10:21:12.060323] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:18.250 [2024-07-12 10:21:12.060353] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe5c3123d0 is same with the state(5) to be set 00:07:18.250 [2024-07-12 10:21:12.060386] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write IC_RESP to socket: rc=0, errno=2 00:07:18.250 passed 00:07:18.250 Test: test_nvmf_tcp_check_xfer_type ...[2024-07-12 10:21:12.060431] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe5c3123d0 is same with the state(5) to be set 00:07:18.250 passed 00:07:18.250 Test: test_nvmf_tcp_invalid_sgl ...passed 00:07:18.250 Test: test_nvmf_tcp_pdu_ch_handle ...[2024-07-12 10:21:12.060503] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2484:nvmf_tcp_req_parse_sgl: *ERROR*: SGL length 0x1001 exceeds max io size 0x1000 00:07:18.250 [2024-07-12 10:21:12.060539] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:18.250 [2024-07-12 10:21:12.060562] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe5c3123d0 is same with the state(5) to be set 00:07:18.250 [2024-07-12 10:21:12.060609] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2216:nvmf_tcp_pdu_ch_handle: *ERROR*: Already received ICreq PDU, and reject this pdu=0x7ffe5c313130 00:07:18.250 [2024-07-12 10:21:12.060696] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:18.250 [2024-07-12 10:21:12.060740] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe5c312890 is same with the state(5) to be set 00:07:18.250 [2024-07-12 10:21:12.060774] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2273:nvmf_tcp_pdu_ch_handle: 
*ERROR*: PDU type=0x00, Expected ICReq header length 128, got 0 on tqpair=0x7ffe5c312890 00:07:18.250 [2024-07-12 10:21:12.060799] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:18.250 [2024-07-12 10:21:12.060835] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe5c312890 is same with the state(5) to be set 00:07:18.250 [2024-07-12 10:21:12.060868] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2226:nvmf_tcp_pdu_ch_handle: *ERROR*: The TCP/IP connection is not negotiated 00:07:18.250 [2024-07-12 10:21:12.060901] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:18.250 [2024-07-12 10:21:12.060938] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe5c312890 is same with the state(5) to be set 00:07:18.250 [2024-07-12 10:21:12.060979] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2265:nvmf_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x05 00:07:18.250 [2024-07-12 10:21:12.061005] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:18.250 [2024-07-12 10:21:12.061034] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe5c312890 is same with the state(5) to be set 00:07:18.251 [2024-07-12 10:21:12.061062] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:18.251 [2024-07-12 10:21:12.061116] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe5c312890 is same with the state(5) to be set 00:07:18.251 [2024-07-12 10:21:12.061171] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:18.251 [2024-07-12 10:21:12.061194] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe5c312890 is same with the state(5) to be set 00:07:18.251 [2024-07-12 10:21:12.061231] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:18.251 [2024-07-12 10:21:12.061253] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe5c312890 is same with the state(5) to be set 00:07:18.251 [2024-07-12 10:21:12.061285] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:18.251 [2024-07-12 10:21:12.061306] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe5c312890 is same with the state(5) to be set 00:07:18.251 [2024-07-12 10:21:12.061355] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:18.251 [2024-07-12 10:21:12.061390] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe5c312890 is same with the state(5) to be set 00:07:18.251 [2024-07-12 10:21:12.061434] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 
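The pdu_ch_handle walk above exercises the common-header checks on each inbound PDU: an ICReq must be the first PDU and carry a 128-byte header, and nothing else is accepted before negotiation completes. A sketch of those checks; the constants follow the NVMe/TCP PDU layout, but the code itself is illustrative rather than SPDK's:

```c
/* Illustrative common-header validation for inbound NVMe/TCP PDUs. */
#include <stdbool.h>
#include <stdint.h>

#define PDU_TYPE_IC_REQ	0x00
#define IC_REQ_HLEN	128

struct pdu_common_hdr {
	uint8_t	 pdu_type;
	uint8_t	 flags;
	uint8_t	 hlen;
	uint8_t	 pdo;
	uint32_t plen;
};

static bool pdu_ch_valid(const struct pdu_common_hdr *ch, bool icreq_seen)
{
	if (ch->pdu_type == PDU_TYPE_IC_REQ) {
		if (icreq_seen) {
			return false;	/* "Already received ICreq PDU, and reject this pdu" */
		}
		/* "PDU type=0x00, Expected ICReq header length 128, got 0" */
		return ch->hlen == IC_REQ_HLEN;
	}
	if (!icreq_seen) {
		return false;		/* "The TCP/IP connection is not negotiated" */
	}
	/* after negotiation the target only accepts host-originated types;
	 * a controller-originated one such as 0x05 is "Unexpected PDU type" */
	return ch->pdu_type != 0x05;
}
```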
00:07:18.251 [2024-07-12 10:21:12.061461] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe5c312890 is same with the state(5) to be set 00:07:18.251 passed 00:07:18.251 Test: test_nvmf_tcp_tls_add_remove_credentials ...passed 00:07:18.251 Test: test_nvmf_tcp_tls_generate_psk_id ...[2024-07-12 10:21:12.079215] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 591:nvme_tcp_generate_psk_identity: *ERROR*: Out buffer too small! 00:07:18.251 [2024-07-12 10:21:12.079274] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 602:nvme_tcp_generate_psk_identity: *ERROR*: Unknown cipher suite requested! 00:07:18.251 passed 00:07:18.251 Test: test_nvmf_tcp_tls_generate_retained_psk ...[2024-07-12 10:21:12.079512] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 658:nvme_tcp_derive_retained_psk: *ERROR*: Unknown PSK hash requested! 00:07:18.251 [2024-07-12 10:21:12.079555] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 663:nvme_tcp_derive_retained_psk: *ERROR*: Insufficient buffer size for out key! 00:07:18.251 passed 00:07:18.251 Test: test_nvmf_tcp_tls_generate_tls_psk ...passed 00:07:18.251 00:07:18.251 Run Summary: Type Total Ran Passed Failed Inactive 00:07:18.251 suites 1 1 n/a 0 0 00:07:18.251 tests 17 17 17 0 0 00:07:18.251 asserts 222 222 222 0 n/a 00:07:18.251 00:07:18.251 Elapsed time = 0.139 seconds 00:07:18.251 [2024-07-12 10:21:12.079719] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 732:nvme_tcp_derive_tls_psk: *ERROR*: Unknown cipher suite requested! 00:07:18.251 [2024-07-12 10:21:12.079765] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 756:nvme_tcp_derive_tls_psk: *ERROR*: Insufficient buffer size for out key! 
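The TLS cases closing the suite check the PSK helper guards: an output buffer that cannot hold the identity string and an unrecognized cipher suite both fail up front. A minimal sketch of the "Out buffer too small!" pattern; the identity format string below is only a stand-in, not the layout the real helper emits:

```c
/* snprintf-based buffer guard; the format string is a placeholder. */
#include <stdio.h>

static int generate_psk_identity(char *out, size_t out_len,
				 const char *hostnqn, const char *subnqn)
{
	int n = snprintf(out, out_len, "NVMe %s %s", hostnqn, subnqn);

	if (n < 0 || (size_t)n >= out_len) {
		return -1;	/* "Out buffer too small!" */
	}
	return 0;
}
```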
00:07:18.251 10:21:12 -- unit/unittest.sh@111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/nvmf.c/nvmf_ut 00:07:18.251 00:07:18.251 00:07:18.251 CUnit - A unit testing framework for C - Version 2.1-3 00:07:18.251 http://cunit.sourceforge.net/ 00:07:18.251 00:07:18.251 00:07:18.251 Suite: nvmf 00:07:18.251 Test: test_nvmf_tgt_create_poll_group ...passed 00:07:18.251 00:07:18.251 Run Summary: Type Total Ran Passed Failed Inactive 00:07:18.251 suites 1 1 n/a 0 0 00:07:18.251 tests 1 1 1 0 0 00:07:18.251 asserts 17 17 17 0 n/a 00:07:18.251 00:07:18.251 Elapsed time = 0.021 seconds 00:07:18.535 00:07:18.535 real 0m0.456s 00:07:18.535 user 0m0.217s 00:07:18.535 sys 0m0.236s 00:07:18.535 10:21:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.535 10:21:12 -- common/autotest_common.sh@10 -- # set +x 00:07:18.535 ************************************ 00:07:18.535 END TEST unittest_nvmf 00:07:18.535 ************************************ 00:07:18.535 10:21:12 -- unit/unittest.sh@260 -- # grep -q '#define SPDK_CONFIG_FC 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:18.535 10:21:12 -- unit/unittest.sh@265 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:18.535 10:21:12 -- unit/unittest.sh@266 -- # run_test unittest_nvmf_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:07:18.535 10:21:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:18.535 10:21:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:18.535 10:21:12 -- common/autotest_common.sh@10 -- # set +x 00:07:18.535 ************************************ 00:07:18.535 START TEST unittest_nvmf_rdma 00:07:18.535 ************************************ 00:07:18.535 10:21:12 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:07:18.535 00:07:18.535 00:07:18.535 CUnit - A unit testing framework for C - Version 2.1-3 00:07:18.535 http://cunit.sourceforge.net/ 00:07:18.535 00:07:18.535 00:07:18.535 Suite: nvmf 00:07:18.535 Test: test_spdk_nvmf_rdma_request_parse_sgl ...[2024-07-12 10:21:12.294839] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1914:nvmf_rdma_request_parse_sgl: *ERROR*: SGL length 0x40000 exceeds max io size 0x20000 00:07:18.535 [2024-07-12 10:21:12.295326] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1964:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x1000 exceeds capsule length 0x0 00:07:18.535 [2024-07-12 10:21:12.295530] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1964:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x2000 exceeds capsule length 0x1000 00:07:18.535 passed 00:07:18.535 Test: test_spdk_nvmf_rdma_request_process ...passed 00:07:18.535 Test: test_nvmf_rdma_get_optimal_poll_group ...passed 00:07:18.535 Test: test_spdk_nvmf_rdma_request_parse_sgl_with_md ...passed 00:07:18.535 Test: test_nvmf_rdma_opts_init ...passed 00:07:18.535 Test: test_nvmf_rdma_request_free_data ...passed 00:07:18.535 Test: test_nvmf_rdma_update_ibv_state ...[2024-07-12 10:21:12.298070] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 614:nvmf_rdma_update_ibv_state: *ERROR*: Failed to get updated RDMA queue pair state! 
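rdma_ut's parse_sgl cases above reject a keyed SGL longer than the transport's max I/O size and in-capsule data longer than the capsule allows. Both checks reduce to a single comparison; a sketch with hypothetical parameter names:

```c
/* The two length gates behind the parse_sgl errors above (names hypothetical). */
#include <stdbool.h>
#include <stdint.h>

static bool keyed_sgl_ok(uint32_t sgl_len, uint32_t max_io_size)
{
	/* "SGL length 0x40000 exceeds max io size 0x20000" */
	return sgl_len <= max_io_size;
}

static bool in_capsule_ok(uint32_t data_len, uint32_t in_capsule_data_size)
{
	/* "In-capsule data length 0x1000 exceeds capsule length 0x0" */
	return data_len <= in_capsule_data_size;
}
```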
00:07:18.535 [2024-07-12 10:21:12.298219] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 625:nvmf_rdma_update_ibv_state: *ERROR*: QP#0: bad state updated: 10, maybe hardware issue 00:07:18.535 passed 00:07:18.535 Test: test_nvmf_rdma_resources_create ...passed 00:07:18.535 Test: test_nvmf_rdma_qpair_compare ...passed 00:07:18.535 Test: test_nvmf_rdma_resize_cq ...[2024-07-12 10:21:12.300327] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1006:nvmf_rdma_resize_cq: *ERROR*: iWARP doesn't support CQ resize. Current capacity 20, required 0 00:07:18.535 Using CQ of insufficient size may lead to CQ overrun 00:07:18.535 [2024-07-12 10:21:12.300541] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1011:nvmf_rdma_resize_cq: *ERROR*: RDMA CQE requirement (26) exceeds device max_cqe limitation (3) 00:07:18.535 [2024-07-12 10:21:12.300701] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1019:nvmf_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:07:18.535 passed 00:07:18.535 00:07:18.535 Run Summary: Type Total Ran Passed Failed Inactive 00:07:18.535 suites 1 1 n/a 0 0 00:07:18.535 tests 10 10 10 0 0 00:07:18.535 asserts 584 584 584 0 n/a 00:07:18.535 00:07:18.535 Elapsed time = 0.004 seconds 00:07:18.535 00:07:18.535 real 0m0.046s 00:07:18.535 user 0m0.012s 00:07:18.535 sys 0m0.032s 00:07:18.535 10:21:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.535 ************************************ 00:07:18.535 10:21:12 -- common/autotest_common.sh@10 -- # set +x 00:07:18.535 END TEST unittest_nvmf_rdma 00:07:18.535 ************************************ 00:07:18.535 10:21:12 -- unit/unittest.sh@269 -- # grep -q '#define SPDK_CONFIG_VFIO_USER 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:18.535 10:21:12 -- unit/unittest.sh@273 -- # run_test unittest_scsi unittest_scsi 00:07:18.535 10:21:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:18.535 10:21:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:18.535 10:21:12 -- common/autotest_common.sh@10 -- # set +x 00:07:18.535 ************************************ 00:07:18.535 START TEST unittest_scsi 00:07:18.535 ************************************ 00:07:18.535 10:21:12 -- common/autotest_common.sh@1104 -- # unittest_scsi 00:07:18.535 10:21:12 -- unit/unittest.sh@115 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/dev.c/dev_ut 00:07:18.535 00:07:18.535 00:07:18.535 CUnit - A unit testing framework for C - Version 2.1-3 00:07:18.535 http://cunit.sourceforge.net/ 00:07:18.535 00:07:18.535 00:07:18.535 Suite: dev_suite 00:07:18.535 Test: dev_destruct_null_dev ...passed 00:07:18.535 Test: dev_destruct_zero_luns ...passed 00:07:18.535 Test: dev_destruct_null_lun ...passed 00:07:18.535 Test: dev_destruct_success ...passed 00:07:18.535 Test: dev_construct_num_luns_zero ...[2024-07-12 10:21:12.391476] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 228:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUNs specified 00:07:18.535 passed 00:07:18.535 Test: dev_construct_no_lun_zero ...passed[2024-07-12 10:21:12.392137] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 241:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUN 0 specified 00:07:18.535 00:07:18.535 Test: dev_construct_null_lun ...[2024-07-12 10:21:12.392349] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 247:spdk_scsi_dev_construct_ext: *ERROR*: NULL spdk_scsi_lun for LUN 0 00:07:18.535 passed 00:07:18.535 Test: dev_construct_name_too_long ...[2024-07-12 10:21:12.392519] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 
222:spdk_scsi_dev_construct_ext: *ERROR*: device xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: name longer than maximum allowed length 255 00:07:18.535 passed 00:07:18.535 Test: dev_construct_success ...passed 00:07:18.535 Test: dev_construct_success_lun_zero_not_first ...passed 00:07:18.535 Test: dev_queue_mgmt_task_success ...passed 00:07:18.535 Test: dev_queue_task_success ...passed 00:07:18.535 Test: dev_stop_success ...passed 00:07:18.535 Test: dev_add_port_max_ports ...[2024-07-12 10:21:12.393837] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 315:spdk_scsi_dev_add_port: *ERROR*: device already has 4 ports 00:07:18.535 passed 00:07:18.535 Test: dev_add_port_construct_failure1 ...[2024-07-12 10:21:12.394299] /home/vagrant/spdk_repo/spdk/lib/scsi/port.c: 49:scsi_port_construct: *ERROR*: port name too long 00:07:18.535 passed 00:07:18.535 Test: dev_add_port_construct_failure2 ...[2024-07-12 10:21:12.394631] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 321:spdk_scsi_dev_add_port: *ERROR*: device already has port(1) 00:07:18.535 passed 00:07:18.535 Test: dev_add_port_success1 ...passed 00:07:18.535 Test: dev_add_port_success2 ...passed 00:07:18.535 Test: dev_add_port_success3 ...passed 00:07:18.535 Test: dev_find_port_by_id_num_ports_zero ...passed 00:07:18.535 Test: dev_find_port_by_id_id_not_found_failure ...passed 00:07:18.535 Test: dev_find_port_by_id_success ...passed 00:07:18.535 Test: dev_add_lun_bdev_not_found ...passed 00:07:18.535 Test: dev_add_lun_no_free_lun_id ...[2024-07-12 10:21:12.395948] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 159:spdk_scsi_dev_add_lun_ext: *ERROR*: Free LUN ID is not found 00:07:18.535 passed 00:07:18.535 Test: dev_add_lun_success1 ...passed 00:07:18.535 Test: dev_add_lun_success2 ...passed 00:07:18.535 Test: dev_check_pending_tasks ...passed 00:07:18.535 Test: dev_iterate_luns ...passed 00:07:18.535 Test: dev_find_free_lun ...passed 00:07:18.535 00:07:18.535 Run Summary: Type Total Ran Passed Failed Inactive 00:07:18.535 suites 1 1 n/a 0 0 00:07:18.535 tests 29 29 29 0 0 00:07:18.535 asserts 97 97 97 0 n/a 00:07:18.535 00:07:18.535 Elapsed time = 0.003 seconds 00:07:18.535 10:21:12 -- unit/unittest.sh@116 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/lun.c/lun_ut 00:07:18.535 00:07:18.535 00:07:18.535 CUnit - A unit testing framework for C - Version 2.1-3 00:07:18.535 http://cunit.sourceforge.net/ 00:07:18.535 00:07:18.535 00:07:18.535 Suite: lun_suite 00:07:18.535 Test: lun_task_mgmt_execute_abort_task_not_supported ...[2024-07-12 10:21:12.428386] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task not supported 00:07:18.535 passed 00:07:18.535 Test: lun_task_mgmt_execute_abort_task_all_not_supported ...[2024-07-12 10:21:12.428775] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task set not supported 00:07:18.535 passed 00:07:18.535 Test: lun_task_mgmt_execute_lun_reset ...passed 00:07:18.535 Test: lun_task_mgmt_execute_target_reset ...passed 00:07:18.535 Test: lun_task_mgmt_execute_invalid_case ...passed 00:07:18.535 Test: lun_append_task_null_lun_task_cdb_spc_inquiry ...passed 00:07:18.535 Test: lun_append_task_null_lun_alloc_len_lt_4096 ...passed 00:07:18.535 Test: lun_append_task_null_lun_not_supported ...[2024-07-12 10:21:12.428971] 
/home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: unknown task not supported 00:07:18.535 passed 00:07:18.535 Test: lun_execute_scsi_task_pending ...passed 00:07:18.535 Test: lun_execute_scsi_task_complete ...passed 00:07:18.535 Test: lun_execute_scsi_task_resize ...passed 00:07:18.535 Test: lun_destruct_success ...passed 00:07:18.535 Test: lun_construct_null_ctx ...passed 00:07:18.535 Test: lun_construct_success ...passed 00:07:18.535 Test: lun_reset_task_wait_scsi_task_complete ...[2024-07-12 10:21:12.429195] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 432:scsi_lun_construct: *ERROR*: bdev_name must be non-NULL 00:07:18.535 passed 00:07:18.535 Test: lun_reset_task_suspend_scsi_task ...passed 00:07:18.535 Test: lun_check_pending_tasks_only_for_specific_initiator ...passed 00:07:18.535 Test: abort_pending_mgmt_tasks_when_lun_is_removed ...passed 00:07:18.535 00:07:18.535 Run Summary: Type Total Ran Passed Failed Inactive 00:07:18.535 suites 1 1 n/a 0 0 00:07:18.535 tests 18 18 18 0 0 00:07:18.535 asserts 153 153 153 0 n/a 00:07:18.535 00:07:18.535 Elapsed time = 0.001 seconds 00:07:18.536 10:21:12 -- unit/unittest.sh@117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi.c/scsi_ut 00:07:18.805 00:07:18.805 00:07:18.805 CUnit - A unit testing framework for C - Version 2.1-3 00:07:18.805 http://cunit.sourceforge.net/ 00:07:18.805 00:07:18.805 00:07:18.805 Suite: scsi_suite 00:07:18.805 Test: scsi_init ...passed 00:07:18.805 00:07:18.805 Run Summary: Type Total Ran Passed Failed Inactive 00:07:18.805 suites 1 1 n/a 0 0 00:07:18.805 tests 1 1 1 0 0 00:07:18.805 asserts 1 1 1 0 n/a 00:07:18.805 00:07:18.805 Elapsed time = 0.000 seconds 00:07:18.805 10:21:12 -- unit/unittest.sh@118 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut 00:07:18.805 00:07:18.805 00:07:18.805 CUnit - A unit testing framework for C - Version 2.1-3 00:07:18.805 http://cunit.sourceforge.net/ 00:07:18.805 00:07:18.805 00:07:18.805 Suite: translation_suite 00:07:18.805 Test: mode_select_6_test ...passed 00:07:18.805 Test: mode_select_6_test2 ...passed 00:07:18.805 Test: mode_sense_6_test ...passed 00:07:18.805 Test: mode_sense_10_test ...passed 00:07:18.805 Test: inquiry_evpd_test ...passed 00:07:18.805 Test: inquiry_standard_test ...passed 00:07:18.805 Test: inquiry_overflow_test ...passed 00:07:18.805 Test: task_complete_test ...passed 00:07:18.805 Test: lba_range_test ...passed 00:07:18.805 Test: xfer_len_test ...[2024-07-12 10:21:12.498177] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_bdev.c:1270:bdev_scsi_readwrite: *ERROR*: xfer_len 8193 > maximum transfer length 8192 00:07:18.805 passed 00:07:18.805 Test: xfer_test ...passed 00:07:18.805 Test: scsi_name_padding_test ...passed 00:07:18.805 Test: get_dif_ctx_test ...passed 00:07:18.806 Test: unmap_split_test ...passed 00:07:18.806 00:07:18.806 Run Summary: Type Total Ran Passed Failed Inactive 00:07:18.806 suites 1 1 n/a 0 0 00:07:18.806 tests 14 14 14 0 0 00:07:18.806 asserts 1200 1200 1200 0 n/a 00:07:18.806 00:07:18.806 Elapsed time = 0.004 seconds 00:07:18.806 10:21:12 -- unit/unittest.sh@119 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut 00:07:18.806 00:07:18.806 00:07:18.806 CUnit - A unit testing framework for C - Version 2.1-3 00:07:18.806 http://cunit.sourceforge.net/ 00:07:18.806 00:07:18.806 00:07:18.806 Suite: reservation_suite 00:07:18.806 Test: test_reservation_register ...[2024-07-12 10:21:12.529451] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 
272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:07:18.806 passed 00:07:18.806 Test: test_reservation_reserve ...[2024-07-12 10:21:12.529797] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:07:18.806 [2024-07-12 10:21:12.529880] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 209:scsi_pr_out_reserve: *ERROR*: Only 1 holder is allowed for type 1 00:07:18.806 passed 00:07:18.806 Test: test_reservation_preempt_non_all_regs ...passed 00:07:18.806 Test: test_reservation_preempt_all_regs ...[2024-07-12 10:21:12.529963] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 204:scsi_pr_out_reserve: *ERROR*: Reservation type doesn't match 00:07:18.806 [2024-07-12 10:21:12.530023] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:07:18.806 [2024-07-12 10:21:12.530090] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 458:scsi_pr_out_preempt: *ERROR*: Zeroed sa_rkey 00:07:18.806 [2024-07-12 10:21:12.530205] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:07:18.806 passed 00:07:18.806 Test: test_reservation_cmds_conflict ...passed 00:07:18.806 Test: test_scsi2_reserve_release ...passed 00:07:18.806 Test: test_pr_with_scsi2_reserve_release ...passed 00:07:18.806 00:07:18.806 [2024-07-12 10:21:12.530335] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:07:18.806 [2024-07-12 10:21:12.530390] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Registrants only reservation type reject command 0x2a 00:07:18.806 [2024-07-12 10:21:12.530427] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:07:18.806 [2024-07-12 10:21:12.530450] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:07:18.806 [2024-07-12 10:21:12.530480] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:07:18.806 [2024-07-12 10:21:12.530500] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:07:18.806 [2024-07-12 10:21:12.530577] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:07:18.806 Run Summary: Type Total Ran Passed Failed Inactive 00:07:18.806 suites 1 1 n/a 0 0 00:07:18.806 tests 7 7 7 0 0 00:07:18.806 asserts 257 257 257 0 n/a 00:07:18.806 00:07:18.806 Elapsed time = 0.001 seconds 00:07:18.806 00:07:18.806 real 0m0.167s 00:07:18.806 user 0m0.072s 00:07:18.806 sys 0m0.093s 00:07:18.806 10:21:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.806 ************************************ 00:07:18.806 END TEST unittest_scsi 00:07:18.806 ************************************ 00:07:18.806 10:21:12 -- common/autotest_common.sh@10 -- # set +x 00:07:18.806 10:21:12 -- unit/unittest.sh@276 -- # uname -s 00:07:18.806 10:21:12 -- unit/unittest.sh@276 -- # '[' Linux = Linux ']' 00:07:18.806 10:21:12 -- unit/unittest.sh@277 -- # run_test unittest_sock 
unittest_sock 00:07:18.806 10:21:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:18.806 10:21:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:18.806 10:21:12 -- common/autotest_common.sh@10 -- # set +x 00:07:18.806 ************************************ 00:07:18.806 START TEST unittest_sock 00:07:18.806 ************************************ 00:07:18.806 10:21:12 -- common/autotest_common.sh@1104 -- # unittest_sock 00:07:18.806 10:21:12 -- unit/unittest.sh@123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/sock.c/sock_ut 00:07:18.806 00:07:18.806 00:07:18.806 CUnit - A unit testing framework for C - Version 2.1-3 00:07:18.806 http://cunit.sourceforge.net/ 00:07:18.806 00:07:18.806 00:07:18.806 Suite: sock 00:07:18.806 Test: posix_sock ...passed 00:07:18.806 Test: ut_sock ...passed 00:07:18.806 Test: posix_sock_group ...passed 00:07:18.806 Test: ut_sock_group ...passed 00:07:18.806 Test: posix_sock_group_fairness ...passed 00:07:18.806 Test: _posix_sock_close ...passed 00:07:18.806 Test: sock_get_default_opts ...passed 00:07:18.806 Test: ut_sock_impl_get_set_opts ...passed 00:07:18.806 Test: posix_sock_impl_get_set_opts ...passed 00:07:18.806 Test: ut_sock_map ...passed 00:07:18.806 Test: override_impl_opts ...passed 00:07:18.806 Test: ut_sock_group_get_ctx ...passed 00:07:18.806 00:07:18.806 Run Summary: Type Total Ran Passed Failed Inactive 00:07:18.806 suites 1 1 n/a 0 0 00:07:18.806 tests 12 12 12 0 0 00:07:18.806 asserts 349 349 349 0 n/a 00:07:18.806 00:07:18.806 Elapsed time = 0.008 seconds 00:07:18.806 10:21:12 -- unit/unittest.sh@124 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/posix.c/posix_ut 00:07:18.806 00:07:18.806 00:07:18.806 CUnit - A unit testing framework for C - Version 2.1-3 00:07:18.806 http://cunit.sourceforge.net/ 00:07:18.806 00:07:18.806 00:07:18.806 Suite: posix 00:07:18.806 Test: flush ...passed 00:07:18.806 00:07:18.806 Run Summary: Type Total Ran Passed Failed Inactive 00:07:18.806 suites 1 1 n/a 0 0 00:07:18.806 tests 1 1 1 0 0 00:07:18.806 asserts 28 28 28 0 n/a 00:07:18.806 00:07:18.806 Elapsed time = 0.000 seconds 00:07:18.806 10:21:12 -- unit/unittest.sh@126 -- # grep -q '#define SPDK_CONFIG_URING 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:18.806 00:07:18.806 real 0m0.105s 00:07:18.806 user 0m0.045s 00:07:18.806 sys 0m0.034s 00:07:18.806 10:21:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.806 10:21:12 -- common/autotest_common.sh@10 -- # set +x 00:07:18.806 ************************************ 00:07:18.806 END TEST unittest_sock 00:07:18.806 ************************************ 00:07:19.065 10:21:12 -- unit/unittest.sh@279 -- # run_test unittest_thread /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:07:19.065 10:21:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:19.065 10:21:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:19.065 10:21:12 -- common/autotest_common.sh@10 -- # set +x 00:07:19.065 ************************************ 00:07:19.065 START TEST unittest_thread 00:07:19.065 ************************************ 00:07:19.065 10:21:12 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:07:19.065 00:07:19.065 00:07:19.065 CUnit - A unit testing framework for C - Version 2.1-3 00:07:19.065 http://cunit.sourceforge.net/ 00:07:19.065 00:07:19.065 00:07:19.065 Suite: io_channel 00:07:19.065 Test: thread_alloc ...passed 00:07:19.065 Test: thread_send_msg ...passed 
00:07:19.065 Test: thread_poller ...passed 00:07:19.065 Test: poller_pause ...passed 00:07:19.065 Test: thread_for_each ...passed 00:07:19.065 Test: for_each_channel_remove ...passed 00:07:19.065 Test: for_each_channel_unreg ...[2024-07-12 10:21:12.789676] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2163:spdk_io_device_register: *ERROR*: io_device 0x7fff095dbaa0 already registered (old:0x613000000200 new:0x6130000003c0) 00:07:19.065 passed 00:07:19.065 Test: thread_name ...passed 00:07:19.065 Test: channel ...[2024-07-12 10:21:12.794428] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2297:spdk_get_io_channel: *ERROR*: could not find io_device 0x56362309e0e0 00:07:19.065 passed 00:07:19.065 Test: channel_destroy_races ...passed 00:07:19.065 Test: thread_exit_test ...[2024-07-12 10:21:12.800222] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 629:thread_exit: *ERROR*: thread 0x618000005c80 got timeout, and move it to the exited state forcefully 00:07:19.065 passed 00:07:19.065 Test: thread_update_stats_test ...passed 00:07:19.065 Test: nested_channel ...passed 00:07:19.065 Test: device_unregister_and_thread_exit_race ...passed 00:07:19.065 Test: cache_closest_timed_poller ...passed 00:07:19.065 Test: multi_timed_pollers_have_same_expiration ...passed 00:07:19.065 Test: io_device_lookup ...passed 00:07:19.065 Test: spdk_spin ...[2024-07-12 10:21:12.813077] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3061:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:07:19.065 [2024-07-12 10:21:12.813247] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7fff095dba90 00:07:19.065 [2024-07-12 10:21:12.813390] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3099:spdk_spin_held: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:07:19.065 [2024-07-12 10:21:12.815189] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3062:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:07:19.065 [2024-07-12 10:21:12.815401] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7fff095dba90 00:07:19.065 [2024-07-12 10:21:12.815469] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:07:19.065 [2024-07-12 10:21:12.815615] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7fff095dba90 00:07:19.065 [2024-07-12 10:21:12.815680] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:07:19.065 [2024-07-12 10:21:12.815743] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7fff095dba90 00:07:19.065 [2024-07-12 10:21:12.815907] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3043:spdk_spin_destroy: *ERROR*: unrecoverable spinlock error 5: Destroying a held spinlock (sspin->thread == ((void *)0)) 00:07:19.065 [2024-07-12 10:21:12.816043] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7fff095dba90 00:07:19.065 passed 00:07:19.065 Test: for_each_channel_and_thread_exit_race ...passed 00:07:19.065 Test: for_each_thread_and_thread_exit_race ...passed 00:07:19.065 00:07:19.065 Run Summary: Type Total Ran Passed Failed Inactive 00:07:19.065 
suites 1 1 n/a 0 0 00:07:19.065 tests 20 20 20 0 0 00:07:19.065 asserts 409 409 409 0 n/a 00:07:19.065 00:07:19.065 Elapsed time = 0.051 seconds 00:07:19.065 00:07:19.065 real 0m0.097s 00:07:19.065 user 0m0.067s 00:07:19.065 sys 0m0.025s 00:07:19.065 10:21:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.065 ************************************ 00:07:19.065 END TEST unittest_thread 00:07:19.065 10:21:12 -- common/autotest_common.sh@10 -- # set +x 00:07:19.065 ************************************ 00:07:19.065 10:21:12 -- unit/unittest.sh@280 -- # run_test unittest_iobuf /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:07:19.065 10:21:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:19.065 10:21:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:19.065 10:21:12 -- common/autotest_common.sh@10 -- # set +x 00:07:19.065 ************************************ 00:07:19.065 START TEST unittest_iobuf 00:07:19.065 ************************************ 00:07:19.065 10:21:12 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:07:19.065 00:07:19.065 00:07:19.065 CUnit - A unit testing framework for C - Version 2.1-3 00:07:19.065 http://cunit.sourceforge.net/ 00:07:19.065 00:07:19.065 00:07:19.065 Suite: io_channel 00:07:19.065 Test: iobuf ...passed 00:07:19.065 Test: iobuf_cache ...[2024-07-12 10:21:12.920756] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 302:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf small buffer cache. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:07:19.065 [2024-07-12 10:21:12.921223] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 305:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:07:19.065 [2024-07-12 10:21:12.921480] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 314:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf large buffer cache. You may need to increase spdk_iobuf_opts.large_pool_count (4) 00:07:19.065 [2024-07-12 10:21:12.921670] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 317:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:07:19.065 [2024-07-12 10:21:12.921768] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 302:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf small buffer cache. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:07:19.065 [2024-07-12 10:21:12.921974] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 305:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 
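[editor's note] For readers unfamiliar with the output format: each "Suite: ... / Test: ... passed / Run Summary" block in this log is emitted by the CUnit Basic interface that the *_ut binaries link against. A minimal, self-contained sketch of such a driver follows; the suite name, test name, and test body are illustrative stand-ins, not SPDK's actual iobuf_ut code.

    #include <CUnit/Basic.h>

    /* Hypothetical test body -- real SPDK tests exercise library code here. */
    static void test_iobuf(void)
    {
        CU_ASSERT_EQUAL(1 + 1, 2);
    }

    int main(void)
    {
        CU_pSuite suite;

        if (CU_initialize_registry() != CUE_SUCCESS) {
            return CU_get_error();
        }
        /* The suite name is what appears after "Suite:" in the log above. */
        suite = CU_add_suite("io_channel", NULL, NULL);
        if (suite == NULL || CU_add_test(suite, "iobuf", test_iobuf) == NULL) {
            CU_cleanup_registry();
            return CU_get_error();
        }
        CU_basic_set_mode(CU_BRM_VERBOSE);
        CU_basic_run_tests();   /* prints the per-test lines and the Run Summary table */
        CU_cleanup_registry();
        return CU_get_number_of_failures();
    }

The "Elapsed time" line and the suites/tests/asserts table in the log come directly from this runner, which is why every suite in the log ends with the same summary shape.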
00:07:19.065 passed 00:07:19.065 00:07:19.065 Run Summary: Type Total Ran Passed Failed Inactive 00:07:19.065 suites 1 1 n/a 0 0 00:07:19.065 tests 2 2 2 0 0 00:07:19.065 asserts 107 107 107 0 n/a 00:07:19.065 00:07:19.065 Elapsed time = 0.006 seconds 00:07:19.065 00:07:19.065 real 0m0.044s 00:07:19.065 user 0m0.028s 00:07:19.065 sys 0m0.016s 00:07:19.065 10:21:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.065 ************************************ 00:07:19.065 END TEST unittest_iobuf 00:07:19.065 10:21:12 -- common/autotest_common.sh@10 -- # set +x 00:07:19.065 ************************************ 00:07:19.065 10:21:12 -- unit/unittest.sh@281 -- # run_test unittest_util unittest_util 00:07:19.065 10:21:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:19.065 10:21:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:19.065 10:21:12 -- common/autotest_common.sh@10 -- # set +x 00:07:19.065 ************************************ 00:07:19.065 START TEST unittest_util 00:07:19.065 ************************************ 00:07:19.065 10:21:12 -- common/autotest_common.sh@1104 -- # unittest_util 00:07:19.065 10:21:12 -- unit/unittest.sh@132 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/base64.c/base64_ut 00:07:19.323 00:07:19.323 00:07:19.323 CUnit - A unit testing framework for C - Version 2.1-3 00:07:19.323 http://cunit.sourceforge.net/ 00:07:19.323 00:07:19.323 00:07:19.323 Suite: base64 00:07:19.323 Test: test_base64_get_encoded_strlen ...passed 00:07:19.323 Test: test_base64_get_decoded_len ...passed 00:07:19.323 Test: test_base64_encode ...passed 00:07:19.323 Test: test_base64_decode ...passed 00:07:19.323 Test: test_base64_urlsafe_encode ...passed 00:07:19.323 Test: test_base64_urlsafe_decode ...passed 00:07:19.323 00:07:19.323 Run Summary: Type Total Ran Passed Failed Inactive 00:07:19.323 suites 1 1 n/a 0 0 00:07:19.323 tests 6 6 6 0 0 00:07:19.323 asserts 112 112 112 0 n/a 00:07:19.323 00:07:19.323 Elapsed time = 0.000 seconds 00:07:19.323 10:21:13 -- unit/unittest.sh@133 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/bit_array.c/bit_array_ut 00:07:19.323 00:07:19.323 00:07:19.323 CUnit - A unit testing framework for C - Version 2.1-3 00:07:19.323 http://cunit.sourceforge.net/ 00:07:19.323 00:07:19.323 00:07:19.323 Suite: bit_array 00:07:19.323 Test: test_1bit ...passed 00:07:19.323 Test: test_64bit ...passed 00:07:19.323 Test: test_find ...passed 00:07:19.323 Test: test_resize ...passed 00:07:19.323 Test: test_errors ...passed 00:07:19.323 Test: test_count ...passed 00:07:19.323 Test: test_mask_store_load ...passed 00:07:19.323 Test: test_mask_clear ...passed 00:07:19.323 00:07:19.323 Run Summary: Type Total Ran Passed Failed Inactive 00:07:19.323 suites 1 1 n/a 0 0 00:07:19.323 tests 8 8 8 0 0 00:07:19.323 asserts 5075 5075 5075 0 n/a 00:07:19.323 00:07:19.323 Elapsed time = 0.002 seconds 00:07:19.323 10:21:13 -- unit/unittest.sh@134 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/cpuset.c/cpuset_ut 00:07:19.323 00:07:19.323 00:07:19.323 CUnit - A unit testing framework for C - Version 2.1-3 00:07:19.323 http://cunit.sourceforge.net/ 00:07:19.323 00:07:19.323 00:07:19.323 Suite: cpuset 00:07:19.323 Test: test_cpuset ...passed 00:07:19.323 Test: test_cpuset_parse ...[2024-07-12 10:21:13.056330] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 239:parse_list: *ERROR*: Unexpected end of core list '[' 00:07:19.323 [2024-07-12 10:21:13.056586] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list 
'[]' failed on character ']' 00:07:19.323 [2024-07-12 10:21:13.056668] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10--11]' failed on character '-' 00:07:19.323 [2024-07-12 10:21:13.056744] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 219:parse_list: *ERROR*: Invalid range of CPUs (11 > 10) 00:07:19.323 [2024-07-12 10:21:13.056770] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10-11,]' failed on character ',' 00:07:19.323 [2024-07-12 10:21:13.056802] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[,10-11]' failed on character ',' 00:07:19.323 [2024-07-12 10:21:13.056825] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 203:parse_list: *ERROR*: Core number 1025 is out of range in '[1025]' 00:07:19.323 [2024-07-12 10:21:13.056869] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 198:parse_list: *ERROR*: Conversion of core mask in '[184467440737095516150]' failed 00:07:19.323 passed 00:07:19.323 Test: test_cpuset_fmt ...passed 00:07:19.323 00:07:19.323 Run Summary: Type Total Ran Passed Failed Inactive 00:07:19.323 suites 1 1 n/a 0 0 00:07:19.323 tests 3 3 3 0 0 00:07:19.323 asserts 65 65 65 0 n/a 00:07:19.323 00:07:19.323 Elapsed time = 0.002 seconds 00:07:19.323 10:21:13 -- unit/unittest.sh@135 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc16.c/crc16_ut 00:07:19.323 00:07:19.323 00:07:19.323 CUnit - A unit testing framework for C - Version 2.1-3 00:07:19.323 http://cunit.sourceforge.net/ 00:07:19.323 00:07:19.323 00:07:19.323 Suite: crc16 00:07:19.323 Test: test_crc16_t10dif ...passed 00:07:19.323 Test: test_crc16_t10dif_seed ...passed 00:07:19.323 Test: test_crc16_t10dif_copy ...passed 00:07:19.323 00:07:19.323 Run Summary: Type Total Ran Passed Failed Inactive 00:07:19.323 suites 1 1 n/a 0 0 00:07:19.323 tests 3 3 3 0 0 00:07:19.323 asserts 5 5 5 0 n/a 00:07:19.323 00:07:19.323 Elapsed time = 0.000 seconds 00:07:19.323 10:21:13 -- unit/unittest.sh@136 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut 00:07:19.323 00:07:19.323 00:07:19.323 CUnit - A unit testing framework for C - Version 2.1-3 00:07:19.323 http://cunit.sourceforge.net/ 00:07:19.323 00:07:19.323 00:07:19.323 Suite: crc32_ieee 00:07:19.323 Test: test_crc32_ieee ...passed 00:07:19.323 00:07:19.323 Run Summary: Type Total Ran Passed Failed Inactive 00:07:19.323 suites 1 1 n/a 0 0 00:07:19.323 tests 1 1 1 0 0 00:07:19.323 asserts 1 1 1 0 n/a 00:07:19.323 00:07:19.323 Elapsed time = 0.000 seconds 00:07:19.323 10:21:13 -- unit/unittest.sh@137 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32c.c/crc32c_ut 00:07:19.323 00:07:19.323 00:07:19.323 CUnit - A unit testing framework for C - Version 2.1-3 00:07:19.323 http://cunit.sourceforge.net/ 00:07:19.323 00:07:19.323 00:07:19.323 Suite: crc32c 00:07:19.323 Test: test_crc32c ...passed 00:07:19.323 Test: test_crc32c_nvme ...passed 00:07:19.323 00:07:19.323 Run Summary: Type Total Ran Passed Failed Inactive 00:07:19.323 suites 1 1 n/a 0 0 00:07:19.323 tests 2 2 2 0 0 00:07:19.323 asserts 16 16 16 0 n/a 00:07:19.323 00:07:19.323 Elapsed time = 0.000 seconds 00:07:19.323 10:21:13 -- unit/unittest.sh@138 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc64.c/crc64_ut 00:07:19.323 00:07:19.323 00:07:19.323 CUnit - A unit testing framework for C - Version 2.1-3 00:07:19.323 http://cunit.sourceforge.net/ 00:07:19.323 00:07:19.323 00:07:19.323 Suite: crc64 00:07:19.323 Test: test_crc64_nvme 
...passed 00:07:19.323 00:07:19.323 Run Summary: Type Total Ran Passed Failed Inactive 00:07:19.323 suites 1 1 n/a 0 0 00:07:19.323 tests 1 1 1 0 0 00:07:19.323 asserts 4 4 4 0 n/a 00:07:19.323 00:07:19.323 Elapsed time = 0.000 seconds 00:07:19.324 10:21:13 -- unit/unittest.sh@139 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/string.c/string_ut 00:07:19.324 00:07:19.324 00:07:19.324 CUnit - A unit testing framework for C - Version 2.1-3 00:07:19.324 http://cunit.sourceforge.net/ 00:07:19.324 00:07:19.324 00:07:19.324 Suite: string 00:07:19.324 Test: test_parse_ip_addr ...passed 00:07:19.324 Test: test_str_chomp ...passed 00:07:19.324 Test: test_parse_capacity ...passed 00:07:19.324 Test: test_sprintf_append_realloc ...passed 00:07:19.324 Test: test_strtol ...passed 00:07:19.324 Test: test_strtoll ...passed 00:07:19.324 Test: test_strarray ...passed 00:07:19.324 Test: test_strcpy_replace ...passed 00:07:19.324 00:07:19.324 Run Summary: Type Total Ran Passed Failed Inactive 00:07:19.324 suites 1 1 n/a 0 0 00:07:19.324 tests 8 8 8 0 0 00:07:19.324 asserts 161 161 161 0 n/a 00:07:19.324 00:07:19.324 Elapsed time = 0.001 seconds 00:07:19.324 10:21:13 -- unit/unittest.sh@140 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/dif.c/dif_ut 00:07:19.324 00:07:19.324 00:07:19.324 CUnit - A unit testing framework for C - Version 2.1-3 00:07:19.324 http://cunit.sourceforge.net/ 00:07:19.324 00:07:19.324 00:07:19.324 Suite: dif 00:07:19.324 Test: dif_generate_and_verify_test ...[2024-07-12 10:21:13.234524] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:07:19.324 [2024-07-12 10:21:13.235013] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:07:19.324 [2024-07-12 10:21:13.235297] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:07:19.324 [2024-07-12 10:21:13.235618] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:07:19.324 [2024-07-12 10:21:13.235895] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:07:19.324 [2024-07-12 10:21:13.236174] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:07:19.324 passed 00:07:19.324 Test: dif_disable_check_test ...[2024-07-12 10:21:13.237185] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:07:19.324 [2024-07-12 10:21:13.237745] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:07:19.324 [2024-07-12 10:21:13.238048] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:07:19.324 passed 00:07:19.324 Test: dif_generate_and_verify_different_pi_formats_test ...[2024-07-12 10:21:13.239079] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a80000, Actual=b9848de 00:07:19.324 [2024-07-12 10:21:13.239396] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b98, Actual=b0a8 00:07:19.324 [2024-07-12 
10:21:13.239700] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a8000000000000, Actual=81039fcf5685d8d4 00:07:19.324 [2024-07-12 10:21:13.240039] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b9848de00000000, Actual=81039fcf5685d8d4 00:07:19.324 [2024-07-12 10:21:13.240359] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:07:19.324 [2024-07-12 10:21:13.240660] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:07:19.324 [2024-07-12 10:21:13.240961] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:07:19.324 [2024-07-12 10:21:13.241267] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:07:19.324 [2024-07-12 10:21:13.241574] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:07:19.324 [2024-07-12 10:21:13.241891] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:07:19.324 [2024-07-12 10:21:13.242198] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:07:19.324 passed 00:07:19.324 Test: dif_apptag_mask_test ...[2024-07-12 10:21:13.242503] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:07:19.324 [2024-07-12 10:21:13.242786] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:07:19.324 passed 00:07:19.324 Test: dif_sec_512_md_0_error_test ...[2024-07-12 10:21:13.242974] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:07:19.324 passed 00:07:19.324 Test: dif_sec_4096_md_0_error_test ...passed 00:07:19.324 Test: dif_sec_4100_md_128_error_test ...passed 00:07:19.324 Test: dif_guard_seed_test ...passed 00:07:19.324 Test: dif_guard_value_test ...[2024-07-12 10:21:13.243007] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:07:19.324 [2024-07-12 10:21:13.243036] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
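[editor's note] The "Failed to compare Guard" messages in these DIF cases refer to the 16-bit guard tag, a CRC computed over each data block with the T10-DIF polynomial 0x8BB7 (initial value 0, MSB-first, no reflection, no final XOR). A self-contained bit-at-a-time sketch is below for reference; SPDK ships an optimized implementation, so treat this only as a readable model of the computation, not the production code path.

    #include <stdint.h>
    #include <stddef.h>

    /* CRC16 T10-DIF: polynomial 0x8BB7, init 0, no reflection, no final XOR. */
    static uint16_t crc16_t10dif(uint16_t crc, const uint8_t *buf, size_t len)
    {
        for (size_t i = 0; i < len; i++) {
            crc ^= (uint16_t)buf[i] << 8;           /* feed next byte, MSB-first */
            for (int bit = 0; bit < 8; bit++) {
                crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x8BB7)
                                     : (uint16_t)(crc << 1);
            }
        }
        return crc;
    }

A verifier recomputes this CRC over each block and compares it with the guard stored in the protection tuple; a mismatch is what produces the "Expected=..., Actual=..." guard lines throughout this suite.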
00:07:19.324 [2024-07-12 10:21:13.243075] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 497:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:07:19.324 [2024-07-12 10:21:13.243101] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 497:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:07:19.324 passed 00:07:19.324 Test: dif_disable_sec_512_md_8_single_iov_test ...passed 00:07:19.324 Test: dif_sec_512_md_8_prchk_0_single_iov_test ...passed 00:07:19.324 Test: dif_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:07:19.324 Test: dif_sec_512_md_8_prchk_0_1_2_4_multi_iovs_test ...passed 00:07:19.584 Test: dif_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:07:19.584 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_test ...passed 00:07:19.584 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:07:19.584 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:07:19.584 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_test ...passed 00:07:19.584 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:07:19.584 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_guard_test ...passed 00:07:19.584 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_guard_test ...passed 00:07:19.584 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_apptag_test ...passed 00:07:19.584 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_apptag_test ...passed 00:07:19.584 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_reftag_test ...passed 00:07:19.584 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_reftag_test ...passed 00:07:19.584 Test: dif_sec_512_md_8_prchk_7_multi_iovs_complex_splits_test ...passed 00:07:19.584 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:07:19.584 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-12 10:21:13.286926] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=f54c, Actual=fd4c 00:07:19.584 [2024-07-12 10:21:13.289387] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=f621, Actual=fe21 00:07:19.584 [2024-07-12 10:21:13.291831] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=888 00:07:19.584 [2024-07-12 10:21:13.294264] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=888 00:07:19.584 [2024-07-12 10:21:13.296728] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=861 00:07:19.584 [2024-07-12 10:21:13.299157] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=861 00:07:19.584 [2024-07-12 10:21:13.301598] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=fd4c, Actual=3fb6 00:07:19.584 [2024-07-12 10:21:13.304021] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=fe21, Actual=4d00 00:07:19.584 [2024-07-12 10:21:13.306446] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=1ab75bed, Actual=1ab753ed 00:07:19.584 [2024-07-12 10:21:13.308876] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=38574e60, Actual=38574660 00:07:19.584 [2024-07-12 10:21:13.311325] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=888 00:07:19.584 [2024-07-12 10:21:13.313779] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=888 00:07:19.584 [2024-07-12 10:21:13.316231] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=861 00:07:19.584 [2024-07-12 10:21:13.318658] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=861 00:07:19.584 [2024-07-12 10:21:13.321104] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=1ab753ed, Actual=9efb32ba 00:07:19.584 [2024-07-12 10:21:13.323535] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=38574660, Actual=ae07c3e7 00:07:19.584 [2024-07-12 10:21:13.325975] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=a576a7728ecc28d3, Actual=a576a7728ecc20d3 00:07:19.584 [2024-07-12 10:21:13.328410] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=88010a2d4837aa66, Actual=88010a2d4837a266 00:07:19.584 [2024-07-12 10:21:13.330831] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=888 00:07:19.584 [2024-07-12 10:21:13.333283] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=888 00:07:19.584 [2024-07-12 10:21:13.335736] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=80000000061 00:07:19.584 [2024-07-12 10:21:13.338166] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=80000000061 00:07:19.584 [2024-07-12 10:21:13.340627] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=a576a7728ecc20d3, Actual=a709790cb870bc2b 00:07:19.584 [2024-07-12 10:21:13.343051] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=88010a2d4837a266, Actual=5cd2123a74275850 00:07:19.584 passed 00:07:19.584 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_and_md_test ...[2024-07-12 10:21:13.344581] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f54c, Actual=fd4c 00:07:19.584 [2024-07-12 10:21:13.344875] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f621, Actual=fe21 00:07:19.584 [2024-07-12 10:21:13.345172] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:07:19.584 [2024-07-12 10:21:13.345469] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:07:19.584 [2024-07-12 10:21:13.345784] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare 
Ref Tag: LBA=88, Expected=58, Actual=858 00:07:19.584 [2024-07-12 10:21:13.346078] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=858 00:07:19.584 [2024-07-12 10:21:13.346371] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=3fb6 00:07:19.584 [2024-07-12 10:21:13.346640] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=4d00 00:07:19.584 [2024-07-12 10:21:13.346925] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab75bed, Actual=1ab753ed 00:07:19.585 [2024-07-12 10:21:13.347206] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574e60, Actual=38574660 00:07:19.585 [2024-07-12 10:21:13.347531] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:07:19.585 [2024-07-12 10:21:13.347831] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:07:19.585 [2024-07-12 10:21:13.348122] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=858 00:07:19.585 [2024-07-12 10:21:13.348403] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=858 00:07:19.585 [2024-07-12 10:21:13.348692] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=9efb32ba 00:07:19.585 [2024-07-12 10:21:13.348961] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=ae07c3e7 00:07:19.585 [2024-07-12 10:21:13.349263] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc28d3, Actual=a576a7728ecc20d3 00:07:19.585 [2024-07-12 10:21:13.349549] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837aa66, Actual=88010a2d4837a266 00:07:19.585 [2024-07-12 10:21:13.349838] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:07:19.585 [2024-07-12 10:21:13.350125] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:07:19.585 [2024-07-12 10:21:13.350418] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000000058 00:07:19.585 [2024-07-12 10:21:13.350698] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000000058 00:07:19.585 [2024-07-12 10:21:13.350998] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=a709790cb870bc2b 00:07:19.585 [2024-07-12 10:21:13.351287] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=5cd2123a74275850 00:07:19.585 passed 00:07:19.585 Test: 
dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_test ...[2024-07-12 10:21:13.351621] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f54c, Actual=fd4c 00:07:19.585 [2024-07-12 10:21:13.351917] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f621, Actual=fe21 00:07:19.585 [2024-07-12 10:21:13.352201] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:07:19.585 [2024-07-12 10:21:13.352491] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:07:19.585 [2024-07-12 10:21:13.352790] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=858 00:07:19.585 [2024-07-12 10:21:13.353092] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=858 00:07:19.585 [2024-07-12 10:21:13.353381] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=3fb6 00:07:19.585 [2024-07-12 10:21:13.353659] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=4d00 00:07:19.585 [2024-07-12 10:21:13.353932] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab75bed, Actual=1ab753ed 00:07:19.585 [2024-07-12 10:21:13.354221] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574e60, Actual=38574660 00:07:19.585 [2024-07-12 10:21:13.354503] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:07:19.585 [2024-07-12 10:21:13.354790] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:07:19.585 [2024-07-12 10:21:13.355076] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=858 00:07:19.585 [2024-07-12 10:21:13.355394] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=858 00:07:19.585 [2024-07-12 10:21:13.355689] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=9efb32ba 00:07:19.585 [2024-07-12 10:21:13.355974] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=ae07c3e7 00:07:19.585 [2024-07-12 10:21:13.356268] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc28d3, Actual=a576a7728ecc20d3 00:07:19.585 [2024-07-12 10:21:13.356549] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837aa66, Actual=88010a2d4837a266 00:07:19.585 [2024-07-12 10:21:13.356838] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:07:19.585 [2024-07-12 10:21:13.357138] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: 
LBA=88, Expected=88, Actual=888 00:07:19.585 [2024-07-12 10:21:13.357433] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000000058 00:07:19.585 [2024-07-12 10:21:13.357714] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000000058 00:07:19.585 [2024-07-12 10:21:13.358027] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=a709790cb870bc2b 00:07:19.585 [2024-07-12 10:21:13.358300] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=5cd2123a74275850 00:07:19.585 passed 00:07:19.585 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_guard_test ...[2024-07-12 10:21:13.358624] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f54c, Actual=fd4c 00:07:19.585 [2024-07-12 10:21:13.358927] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f621, Actual=fe21 00:07:19.585 [2024-07-12 10:21:13.359220] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:07:19.585 [2024-07-12 10:21:13.359519] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:07:19.585 [2024-07-12 10:21:13.359838] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=858 00:07:19.585 [2024-07-12 10:21:13.360125] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=858 00:07:19.585 [2024-07-12 10:21:13.360417] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=3fb6 00:07:19.585 [2024-07-12 10:21:13.360689] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=4d00 00:07:19.585 [2024-07-12 10:21:13.360972] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab75bed, Actual=1ab753ed 00:07:19.585 [2024-07-12 10:21:13.361267] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574e60, Actual=38574660 00:07:19.585 [2024-07-12 10:21:13.361577] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:07:19.585 [2024-07-12 10:21:13.361871] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:07:19.585 [2024-07-12 10:21:13.362157] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=858 00:07:19.585 [2024-07-12 10:21:13.362450] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=858 00:07:19.585 [2024-07-12 10:21:13.362741] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=9efb32ba 00:07:19.585 [2024-07-12 10:21:13.363019] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=ae07c3e7 00:07:19.585 [2024-07-12 10:21:13.363312] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc28d3, Actual=a576a7728ecc20d3 00:07:19.585 [2024-07-12 10:21:13.363635] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837aa66, Actual=88010a2d4837a266 00:07:19.585 [2024-07-12 10:21:13.363927] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:07:19.585 [2024-07-12 10:21:13.364222] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:07:19.585 [2024-07-12 10:21:13.364513] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000000058 00:07:19.585 [2024-07-12 10:21:13.364806] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000000058 00:07:19.585 [2024-07-12 10:21:13.365125] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=a709790cb870bc2b 00:07:19.585 [2024-07-12 10:21:13.365409] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=5cd2123a74275850 00:07:19.585 passed 00:07:19.585 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_pi_16_test ...[2024-07-12 10:21:13.365728] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f54c, Actual=fd4c 00:07:19.585 [2024-07-12 10:21:13.366014] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f621, Actual=fe21 00:07:19.585 [2024-07-12 10:21:13.366307] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:07:19.585 [2024-07-12 10:21:13.366601] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:07:19.585 [2024-07-12 10:21:13.366910] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=858 00:07:19.585 [2024-07-12 10:21:13.367194] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=858 00:07:19.585 [2024-07-12 10:21:13.367500] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=3fb6 00:07:19.585 [2024-07-12 10:21:13.367774] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=4d00 00:07:19.585 passed 00:07:19.585 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_test ...[2024-07-12 10:21:13.368103] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab75bed, Actual=1ab753ed 00:07:19.585 [2024-07-12 10:21:13.368400] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574e60, 
Actual=38574660 00:07:19.585 [2024-07-12 10:21:13.368707] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:07:19.585 [2024-07-12 10:21:13.368993] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:07:19.585 [2024-07-12 10:21:13.369296] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=858 00:07:19.585 [2024-07-12 10:21:13.369585] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=858 00:07:19.585 [2024-07-12 10:21:13.369880] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=9efb32ba 00:07:19.585 [2024-07-12 10:21:13.370153] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=ae07c3e7 00:07:19.585 [2024-07-12 10:21:13.370475] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc28d3, Actual=a576a7728ecc20d3 00:07:19.585 [2024-07-12 10:21:13.370777] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837aa66, Actual=88010a2d4837a266 00:07:19.586 [2024-07-12 10:21:13.371065] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:07:19.586 [2024-07-12 10:21:13.371378] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:07:19.586 [2024-07-12 10:21:13.371679] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000000058 00:07:19.586 [2024-07-12 10:21:13.371974] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000000058 00:07:19.586 [2024-07-12 10:21:13.372276] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=a709790cb870bc2b 00:07:19.586 [2024-07-12 10:21:13.372557] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=5cd2123a74275850 00:07:19.586 passed 00:07:19.586 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_pi_16_test ...[2024-07-12 10:21:13.372871] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f54c, Actual=fd4c 00:07:19.586 [2024-07-12 10:21:13.373178] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f621, Actual=fe21 00:07:19.586 [2024-07-12 10:21:13.373466] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:07:19.586 [2024-07-12 10:21:13.373766] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:07:19.586 [2024-07-12 10:21:13.374077] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=858 00:07:19.586 [2024-07-12 
10:21:13.374363] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=858 00:07:19.586 [2024-07-12 10:21:13.374655] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=3fb6 00:07:19.586 [2024-07-12 10:21:13.374926] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=4d00 00:07:19.586 passed 00:07:19.586 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_test ...[2024-07-12 10:21:13.375253] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab75bed, Actual=1ab753ed 00:07:19.586 [2024-07-12 10:21:13.375555] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574e60, Actual=38574660 00:07:19.586 [2024-07-12 10:21:13.375865] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:07:19.586 [2024-07-12 10:21:13.376161] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:07:19.586 [2024-07-12 10:21:13.376455] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=858 00:07:19.586 [2024-07-12 10:21:13.376748] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=858 00:07:19.586 [2024-07-12 10:21:13.377043] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=9efb32ba 00:07:19.586 [2024-07-12 10:21:13.377325] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=ae07c3e7 00:07:19.586 [2024-07-12 10:21:13.377664] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc28d3, Actual=a576a7728ecc20d3 00:07:19.586 [2024-07-12 10:21:13.377955] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837aa66, Actual=88010a2d4837a266 00:07:19.586 [2024-07-12 10:21:13.378250] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:07:19.586 [2024-07-12 10:21:13.378541] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:07:19.586 [2024-07-12 10:21:13.378834] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000000058 00:07:19.586 [2024-07-12 10:21:13.379118] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000000058 00:07:19.586 [2024-07-12 10:21:13.379446] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=a709790cb870bc2b 00:07:19.586 [2024-07-12 10:21:13.379738] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=5cd2123a74275850 00:07:19.586 passed 00:07:19.586 Test: 
dif_copy_sec_512_md_8_prchk_0_single_iov ...passed 00:07:19.586 Test: dif_copy_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:07:19.586 Test: dif_copy_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:07:19.586 Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:07:19.586 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:07:19.586 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:07:19.586 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:07:19.586 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:07:19.586 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:07:19.586 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-12 10:21:13.423284] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=f54c, Actual=fd4c 00:07:19.586 [2024-07-12 10:21:13.424399] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=1086, Actual=1886 00:07:19.586 [2024-07-12 10:21:13.425508] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=888 00:07:19.586 [2024-07-12 10:21:13.426596] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=888 00:07:19.586 [2024-07-12 10:21:13.427711] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=861 00:07:19.586 [2024-07-12 10:21:13.428802] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=861 00:07:19.586 [2024-07-12 10:21:13.429900] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=fd4c, Actual=3fb6 00:07:19.586 [2024-07-12 10:21:13.430983] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=5b17, Actual=e836 00:07:19.586 [2024-07-12 10:21:13.432094] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=1ab75bed, Actual=1ab753ed 00:07:19.586 [2024-07-12 10:21:13.433201] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=b4f97855, Actual=b4f97055 00:07:19.586 [2024-07-12 10:21:13.434312] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=888 00:07:19.586 [2024-07-12 10:21:13.435455] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=888 00:07:19.586 [2024-07-12 10:21:13.436557] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=861 00:07:19.586 [2024-07-12 10:21:13.437668] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=861 00:07:19.586 [2024-07-12 10:21:13.438763] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=1ab753ed, Actual=9efb32ba 00:07:19.586 [2024-07-12 10:21:13.439878] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=50d983f, 
Actual=935d1db8 00:07:19.586 [2024-07-12 10:21:13.440975] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=a576a7728ecc28d3, Actual=a576a7728ecc20d3 00:07:19.586 [2024-07-12 10:21:13.442113] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=a540ae5e6af41aff, Actual=a540ae5e6af412ff 00:07:19.586 [2024-07-12 10:21:13.443207] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=888 00:07:19.586 [2024-07-12 10:21:13.444328] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=888 00:07:19.586 [2024-07-12 10:21:13.445434] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=80000000061 00:07:19.586 [2024-07-12 10:21:13.446539] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=80000000061 00:07:19.586 [2024-07-12 10:21:13.447649] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=a576a7728ecc20d3, Actual=a709790cb870bc2b 00:07:19.586 passed 00:07:19.586 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-07-12 10:21:13.448768] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=3cc902869290d092, Actual=e81a1a91ae802aa4 00:07:19.586 [2024-07-12 10:21:13.449107] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=f54c, Actual=fd4c 00:07:19.586 [2024-07-12 10:21:13.449371] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=d951, Actual=d151 00:07:19.586 [2024-07-12 10:21:13.449637] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=888 00:07:19.586 [2024-07-12 10:21:13.449903] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=888 00:07:19.586 [2024-07-12 10:21:13.450185] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=859 00:07:19.586 [2024-07-12 10:21:13.450472] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=859 00:07:19.586 [2024-07-12 10:21:13.450728] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=3fb6 00:07:19.586 [2024-07-12 10:21:13.450988] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=92c0, Actual=21e1 00:07:19.586 [2024-07-12 10:21:13.451245] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab75bed, Actual=1ab753ed 00:07:19.586 [2024-07-12 10:21:13.451533] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=2a49b007, Actual=2a49b807 00:07:19.586 [2024-07-12 10:21:13.451824] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=888 00:07:19.586 [2024-07-12 10:21:13.452092] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=888 00:07:19.586 [2024-07-12 10:21:13.452351] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=859 00:07:19.586 [2024-07-12 10:21:13.452614] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=859 00:07:19.586 [2024-07-12 10:21:13.452871] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=9efb32ba 00:07:19.586 [2024-07-12 10:21:13.453143] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9bbd506d, Actual=dedd5ea 00:07:19.586 [2024-07-12 10:21:13.453425] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc28d3, Actual=a576a7728ecc20d3 00:07:19.586 [2024-07-12 10:21:13.453681] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=d1a14cb3b120ff02, Actual=d1a14cb3b120f702 00:07:19.586 [2024-07-12 10:21:13.453945] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=888 00:07:19.586 [2024-07-12 10:21:13.454202] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=888 00:07:19.586 [2024-07-12 10:21:13.454466] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=80000000059 00:07:19.587 [2024-07-12 10:21:13.454721] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=80000000059 00:07:19.587 [2024-07-12 10:21:13.455002] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=a709790cb870bc2b 00:07:19.587 [2024-07-12 10:21:13.455265] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=4828e06b4944356f, Actual=9cfbf87c7554cf59 00:07:19.587 passed 00:07:19.587 Test: dix_sec_512_md_0_error ...passed[2024-07-12 10:21:13.455324] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
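[editor's note] The dif_* versus dix_* split in the test names mirrors the two T10 protection layouts: DIF interleaves an 8-byte protection tuple after each data block (hence the "Metadata size is smaller than DIF size" checks), while DIX carries the same tuples in a separate metadata buffer. The tuple layout is sketched below; the field names are descriptive, not SPDK's exact struct.

    #include <stdint.h>

    /* 8-byte T10 protection information tuple, big-endian on media. */
    struct t10_pi_tuple {
        uint16_t guard;    /* CRC16-T10DIF over the data block            */
        uint16_t app_tag;  /* application tag; see the apptag_mask tests  */
        uint32_t ref_tag;  /* reference tag; for Type 1 it tracks the LBA */
    };

The "Failed to compare Ref Tag: LBA=89, Expected=59, Actual=859" lines above are the verify path reporting a tuple whose stored field disagrees with the value recomputed for that LBA.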
00:07:19.587 00:07:19.587 Test: dix_sec_512_md_8_prchk_0_single_iov ...passed 00:07:19.587 Test: dix_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:07:19.587 Test: dix_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:07:19.587 Test: dix_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:07:19.587 Test: dix_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:07:19.587 Test: dix_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:07:19.587 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:07:19.587 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:07:19.587 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:07:19.587 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-12 10:21:13.498449] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=f54c, Actual=fd4c 00:07:19.587 [2024-07-12 10:21:13.499570] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=1086, Actual=1886 00:07:19.587 [2024-07-12 10:21:13.500677] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=888 00:07:19.587 [2024-07-12 10:21:13.501775] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=888 00:07:19.587 [2024-07-12 10:21:13.502893] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=861 00:07:19.587 [2024-07-12 10:21:13.504007] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=861 00:07:19.587 [2024-07-12 10:21:13.505111] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=fd4c, Actual=3fb6 00:07:19.587 [2024-07-12 10:21:13.506208] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=5b17, Actual=e836 00:07:19.587 [2024-07-12 10:21:13.507292] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=1ab75bed, Actual=1ab753ed 00:07:19.587 [2024-07-12 10:21:13.508413] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=b4f97855, Actual=b4f97055 00:07:19.587 [2024-07-12 10:21:13.509533] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=888 00:07:19.587 [2024-07-12 10:21:13.510643] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=888 00:07:19.587 [2024-07-12 10:21:13.511762] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=861 00:07:19.846 [2024-07-12 10:21:13.512863] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=861 00:07:19.846 [2024-07-12 10:21:13.513972] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=1ab753ed, Actual=9efb32ba 00:07:19.846 [2024-07-12 10:21:13.515074] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=50d983f, Actual=935d1db8 
00:07:19.846 [2024-07-12 10:21:13.516210] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=a576a7728ecc28d3, Actual=a576a7728ecc20d3 00:07:19.846 [2024-07-12 10:21:13.517321] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=a540ae5e6af41aff, Actual=a540ae5e6af412ff 00:07:19.846 [2024-07-12 10:21:13.518415] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=888 00:07:19.846 [2024-07-12 10:21:13.519512] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=888 00:07:19.846 [2024-07-12 10:21:13.520611] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=80000000061 00:07:19.846 [2024-07-12 10:21:13.521705] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=80000000061 00:07:19.846 [2024-07-12 10:21:13.522812] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=a576a7728ecc20d3, Actual=a709790cb870bc2b 00:07:19.846 passed 00:07:19.846 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-07-12 10:21:13.523922] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=3cc902869290d092, Actual=e81a1a91ae802aa4 00:07:19.846 [2024-07-12 10:21:13.524266] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=f54c, Actual=fd4c 00:07:19.846 [2024-07-12 10:21:13.524533] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=d951, Actual=d151 00:07:19.846 [2024-07-12 10:21:13.524799] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=888 00:07:19.846 [2024-07-12 10:21:13.525068] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=888 00:07:19.846 [2024-07-12 10:21:13.525384] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=859 00:07:19.846 [2024-07-12 10:21:13.525645] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=859 00:07:19.846 [2024-07-12 10:21:13.525908] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=3fb6 00:07:19.846 [2024-07-12 10:21:13.526163] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=92c0, Actual=21e1 00:07:19.846 [2024-07-12 10:21:13.526426] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab75bed, Actual=1ab753ed 00:07:19.846 [2024-07-12 10:21:13.526685] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=2a49b007, Actual=2a49b807 00:07:19.846 [2024-07-12 10:21:13.526959] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=888 00:07:19.846 [2024-07-12 10:21:13.527222] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=888 00:07:19.846 [2024-07-12 10:21:13.527488] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=859 00:07:19.846 [2024-07-12 10:21:13.527751] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=859 00:07:19.846 [2024-07-12 10:21:13.528008] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=9efb32ba 00:07:19.846 [2024-07-12 10:21:13.528270] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9bbd506d, Actual=dedd5ea 00:07:19.846 [2024-07-12 10:21:13.528536] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc28d3, Actual=a576a7728ecc20d3 00:07:19.846 [2024-07-12 10:21:13.528797] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=d1a14cb3b120ff02, Actual=d1a14cb3b120f702 00:07:19.846 [2024-07-12 10:21:13.529048] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=888 00:07:19.846 [2024-07-12 10:21:13.529330] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=888 00:07:19.846 [2024-07-12 10:21:13.529584] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=80000000059 00:07:19.846 [2024-07-12 10:21:13.529849] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=80000000059 00:07:19.846 [2024-07-12 10:21:13.530104] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=a709790cb870bc2b 00:07:19.846 [2024-07-12 10:21:13.530362] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=4828e06b4944356f, Actual=9cfbf87c7554cf59 00:07:19.846 passed 00:07:19.846 Test: set_md_interleave_iovs_test ...passed 00:07:19.846 Test: set_md_interleave_iovs_split_test ...passed 00:07:19.846 Test: dif_generate_stream_pi_16_test ...passed 00:07:19.846 Test: dif_generate_stream_test ...passed 00:07:19.846 Test: set_md_interleave_iovs_alignment_test ...[2024-07-12 10:21:13.537701] /home/vagrant/spdk_repo/spdk/lib/util/dif.c:1799:spdk_dif_set_md_interleave_iovs: *ERROR*: Buffer overflow will occur. 
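Every *_ut binary driven by unittest.sh is a stand-alone CUnit program, which is what produces the "Run Summary" tables throughout this log. The suite and test names below are placeholders, but the harness shape is the standard CUnit Basic pattern these binaries follow.

    #include <CUnit/Basic.h>

    /* Placeholder test: real SPDK *_ut binaries register their suites and
     * tests the same way before running and printing the Run Summary. */
    static void test_example(void)
    {
        CU_ASSERT_EQUAL(1 + 1, 2);
    }

    int main(void)
    {
        CU_pSuite suite;
        unsigned int num_failures;

        if (CU_initialize_registry() != CUE_SUCCESS) {
            return CU_get_error();
        }
        suite = CU_add_suite("example_suite", NULL, NULL);
        if (suite == NULL ||
            CU_add_test(suite, "test_example", test_example) == NULL) {
            CU_cleanup_registry();
            return CU_get_error();
        }
        CU_basic_set_mode(CU_BRM_VERBOSE);
        CU_basic_run_tests();
        num_failures = CU_get_number_of_failures();
        CU_cleanup_registry();
        return num_failures;   /* non-zero exit fails the surrounding run_test */
    }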
00:07:19.846 passed 00:07:19.846 Test: dif_generate_split_test ...passed 00:07:19.846 Test: set_md_interleave_iovs_multi_segments_test ...passed 00:07:19.846 Test: dif_verify_split_test ...passed 00:07:19.846 Test: dif_verify_stream_multi_segments_test ...passed 00:07:19.846 Test: update_crc32c_pi_16_test ...passed 00:07:19.846 Test: update_crc32c_test ...passed 00:07:19.846 Test: dif_update_crc32c_split_test ...passed 00:07:19.846 Test: dif_update_crc32c_stream_multi_segments_test ...passed 00:07:19.846 Test: get_range_with_md_test ...passed 00:07:19.846 Test: dif_sec_512_md_8_prchk_7_multi_iovs_remap_pi_16_test ...passed 00:07:19.846 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_remap_test ...passed 00:07:19.846 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:07:19.846 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_remap ...passed 00:07:19.846 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits_remap_pi_16_test ...passed 00:07:19.846 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:07:19.846 Test: dif_generate_and_verify_unmap_test ...passed 00:07:19.846 00:07:19.846 Run Summary: Type Total Ran Passed Failed Inactive 00:07:19.846 suites 1 1 n/a 0 0 00:07:19.846 tests 79 79 79 0 0 00:07:19.846 asserts 3584 3584 3584 0 n/a 00:07:19.846 00:07:19.846 Elapsed time = 0.350 seconds 00:07:19.846 10:21:13 -- unit/unittest.sh@141 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/iov.c/iov_ut 00:07:19.846 00:07:19.846 00:07:19.846 CUnit - A unit testing framework for C - Version 2.1-3 00:07:19.846 http://cunit.sourceforge.net/ 00:07:19.846 00:07:19.846 00:07:19.846 Suite: iov 00:07:19.846 Test: test_single_iov ...passed 00:07:19.846 Test: test_simple_iov ...passed 00:07:19.846 Test: test_complex_iov ...passed 00:07:19.846 Test: test_iovs_to_buf ...passed 00:07:19.846 Test: test_buf_to_iovs ...passed 00:07:19.846 Test: test_memset ...passed 00:07:19.846 Test: test_iov_one ...passed 00:07:19.846 Test: test_iov_xfer ...passed 00:07:19.846 00:07:19.846 Run Summary: Type Total Ran Passed Failed Inactive 00:07:19.846 suites 1 1 n/a 0 0 00:07:19.847 tests 8 8 8 0 0 00:07:19.847 asserts 156 156 156 0 n/a 00:07:19.847 00:07:19.847 Elapsed time = 0.000 seconds 00:07:19.847 10:21:13 -- unit/unittest.sh@142 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/math.c/math_ut 00:07:19.847 00:07:19.847 00:07:19.847 CUnit - A unit testing framework for C - Version 2.1-3 00:07:19.847 http://cunit.sourceforge.net/ 00:07:19.847 00:07:19.847 00:07:19.847 Suite: math 00:07:19.847 Test: test_serial_number_arithmetic ...passed 00:07:19.847 Suite: erase 00:07:19.847 Test: test_memset_s ...passed 00:07:19.847 00:07:19.847 Run Summary: Type Total Ran Passed Failed Inactive 00:07:19.847 suites 2 2 n/a 0 0 00:07:19.847 tests 2 2 2 0 0 00:07:19.847 asserts 18 18 18 0 n/a 00:07:19.847 00:07:19.847 Elapsed time = 0.000 seconds 00:07:19.847 10:21:13 -- unit/unittest.sh@143 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/pipe.c/pipe_ut 00:07:19.847 00:07:19.847 00:07:19.847 CUnit - A unit testing framework for C - Version 2.1-3 00:07:19.847 http://cunit.sourceforge.net/ 00:07:19.847 00:07:19.847 00:07:19.847 Suite: pipe 00:07:19.847 Test: test_create_destroy ...passed 00:07:19.847 Test: test_write_get_buffer ...passed 00:07:19.847 Test: test_write_advance ...passed 00:07:19.847 Test: test_read_get_buffer ...passed 00:07:19.847 Test: test_read_advance ...passed 00:07:19.847 Test: test_data ...passed 00:07:19.847 00:07:19.847 Run Summary: Type Total Ran 
Passed Failed Inactive 00:07:19.847 suites 1 1 n/a 0 0 00:07:19.847 tests 6 6 6 0 0 00:07:19.847 asserts 250 250 250 0 n/a 00:07:19.847 00:07:19.847 Elapsed time = 0.000 seconds 00:07:19.847 10:21:13 -- unit/unittest.sh@144 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/xor.c/xor_ut 00:07:19.847 00:07:19.847 00:07:19.847 CUnit - A unit testing framework for C - Version 2.1-3 00:07:19.847 http://cunit.sourceforge.net/ 00:07:19.847 00:07:19.847 00:07:19.847 Suite: xor 00:07:19.847 Test: test_xor_gen ...passed 00:07:19.847 00:07:19.847 Run Summary: Type Total Ran Passed Failed Inactive 00:07:19.847 suites 1 1 n/a 0 0 00:07:19.847 tests 1 1 1 0 0 00:07:19.847 asserts 17 17 17 0 n/a 00:07:19.847 00:07:19.847 Elapsed time = 0.006 seconds 00:07:19.847 00:07:19.847 real 0m0.728s 00:07:19.847 user 0m0.511s 00:07:19.847 sys 0m0.222s 00:07:19.847 10:21:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.847 10:21:13 -- common/autotest_common.sh@10 -- # set +x 00:07:19.847 ************************************ 00:07:19.847 END TEST unittest_util 00:07:19.847 ************************************ 00:07:19.847 10:21:13 -- unit/unittest.sh@282 -- # grep -q '#define SPDK_CONFIG_VHOST 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:19.847 10:21:13 -- unit/unittest.sh@283 -- # run_test unittest_vhost /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:07:19.847 10:21:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:19.847 10:21:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:19.847 10:21:13 -- common/autotest_common.sh@10 -- # set +x 00:07:20.105 ************************************ 00:07:20.105 START TEST unittest_vhost 00:07:20.105 ************************************ 00:07:20.105 10:21:13 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:07:20.105 00:07:20.105 00:07:20.105 CUnit - A unit testing framework for C - Version 2.1-3 00:07:20.105 http://cunit.sourceforge.net/ 00:07:20.105 00:07:20.105 00:07:20.105 Suite: vhost_suite 00:07:20.105 Test: desc_to_iov_test ...[2024-07-12 10:21:13.802394] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c: 647:vhost_vring_desc_payload_to_iov: *ERROR*: SPDK_VHOST_IOVS_MAX(129) reached 00:07:20.105 passed 00:07:20.105 Test: create_controller_test ...[2024-07-12 10:21:13.807376] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:07:20.105 [2024-07-12 10:21:13.807620] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xf0 is invalid (core mask is 0xf) 00:07:20.105 [2024-07-12 10:21:13.807859] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:07:20.105 [2024-07-12 10:21:13.808059] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xff is invalid (core mask is 0xf) 00:07:20.105 [2024-07-12 10:21:13.808232] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 121:vhost_dev_register: *ERROR*: Can't register controller with no name 00:07:20.105 [2024-07-12 10:21:13.808470] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1798:vhost_user_dev_init: *ERROR*: Resulting socket path for controller 
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx[2024-07-12 10:21:13.809812] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 133:vhost_dev_register: *ERROR*: vhost controller vdev_name_0 already exists. 00:07:20.105 passed 00:07:20.105 Test: session_find_by_vid_test ...passed 00:07:20.105 Test: remove_controller_test ...[2024-07-12 10:21:13.812682] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1883:vhost_user_dev_unregister: *ERROR*: Controller vdev_name_0 has still valid connection. 00:07:20.105 passed 00:07:20.105 Test: vq_avail_ring_get_test ...passed 00:07:20.105 Test: vq_packed_ring_test ...passed 00:07:20.105 Test: vhost_blk_construct_test ...passed 00:07:20.105 00:07:20.105 Run Summary: Type Total Ran Passed Failed Inactive 00:07:20.105 suites 1 1 n/a 0 0 00:07:20.105 tests 7 7 7 0 0 00:07:20.105 asserts 145 145 145 0 n/a 00:07:20.105 00:07:20.105 Elapsed time = 0.013 seconds 00:07:20.105 00:07:20.105 real 0m0.056s 00:07:20.105 user 0m0.041s 00:07:20.105 sys 0m0.012s 00:07:20.105 10:21:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.105 10:21:13 -- common/autotest_common.sh@10 -- # set +x 00:07:20.105 ************************************ 00:07:20.105 END TEST unittest_vhost 00:07:20.105 ************************************ 00:07:20.105 10:21:13 -- unit/unittest.sh@285 -- # run_test unittest_dma /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:07:20.105 10:21:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:20.105 10:21:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:20.105 10:21:13 -- common/autotest_common.sh@10 -- # set +x 00:07:20.105 ************************************ 00:07:20.105 START TEST unittest_dma 00:07:20.105 ************************************ 00:07:20.105 10:21:13 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:07:20.105 00:07:20.105 00:07:20.105 CUnit - A unit testing framework for C - Version 2.1-3 00:07:20.105 http://cunit.sourceforge.net/ 00:07:20.105 00:07:20.105 00:07:20.105 Suite: dma_suite 00:07:20.105 Test: test_dma ...[2024-07-12 10:21:13.902450] /home/vagrant/spdk_repo/spdk/lib/dma/dma.c: 37:spdk_memory_domain_create: *ERROR*: Context size can't be 0 00:07:20.105 passed 00:07:20.105 00:07:20.105 Run Summary: Type Total Ran Passed Failed Inactive 00:07:20.105 suites 1 1 n/a 0 0 00:07:20.105 tests 1 1 1 0 0 00:07:20.105 asserts 50 50 50 0 n/a 00:07:20.105 00:07:20.105 Elapsed time = 0.000 seconds 00:07:20.105 00:07:20.105 real 0m0.031s 00:07:20.105 user 0m0.017s 00:07:20.105 sys 0m0.014s 00:07:20.105 10:21:13 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.105 10:21:13 -- common/autotest_common.sh@10 -- # set +x 00:07:20.105 ************************************ 00:07:20.105 END TEST unittest_dma 00:07:20.105 ************************************ 00:07:20.105 10:21:13 -- unit/unittest.sh@287 -- # run_test unittest_init unittest_init 00:07:20.105 10:21:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:20.105 10:21:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:20.105 10:21:13 -- common/autotest_common.sh@10 -- # set +x 00:07:20.105 ************************************ 00:07:20.105 START TEST unittest_init 00:07:20.105 ************************************ 00:07:20.105 10:21:13 -- common/autotest_common.sh@1104 -- # unittest_init 00:07:20.105 10:21:13 -- unit/unittest.sh@148 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/init/subsystem.c/subsystem_ut 00:07:20.105 00:07:20.105 00:07:20.105 CUnit - A unit testing framework for C - Version 2.1-3 00:07:20.105 http://cunit.sourceforge.net/ 00:07:20.105 00:07:20.105 00:07:20.105 Suite: subsystem_suite 00:07:20.105 Test: subsystem_sort_test_depends_on_single ...passed 00:07:20.105 Test: subsystem_sort_test_depends_on_multiple ...passed 00:07:20.105 Test: subsystem_sort_test_missing_dependency ...[2024-07-12 10:21:13.992157] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 190:spdk_subsystem_init: *ERROR*: subsystem A dependency B is missing 00:07:20.105 [2024-07-12 10:21:13.992567] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 185:spdk_subsystem_init: *ERROR*: subsystem C is missing 00:07:20.105 passed 00:07:20.105 00:07:20.105 Run Summary: Type Total Ran Passed Failed Inactive 00:07:20.105 suites 1 1 n/a 0 0 00:07:20.105 tests 3 3 3 0 0 00:07:20.105 asserts 20 20 20 0 n/a 00:07:20.105 00:07:20.105 Elapsed time = 0.001 seconds 00:07:20.105 00:07:20.105 real 0m0.040s 00:07:20.105 user 0m0.027s 00:07:20.105 sys 0m0.012s 00:07:20.105 10:21:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.105 10:21:14 -- common/autotest_common.sh@10 -- # set +x 00:07:20.105 ************************************ 00:07:20.105 END TEST unittest_init 00:07:20.105 ************************************ 00:07:20.363 10:21:14 -- unit/unittest.sh@289 -- # '[' yes = yes ']' 00:07:20.363 10:21:14 -- unit/unittest.sh@289 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:07:20.363 10:21:14 -- unit/unittest.sh@290 -- # hostname 00:07:20.363 10:21:14 -- unit/unittest.sh@290 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -d . -c -t ubuntu2004-cloud-1712646987-2220 -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:07:20.363 geninfo: WARNING: invalid characters removed from testname! 
00:07:52.427 10:21:41 -- unit/unittest.sh@291 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info 00:07:52.427 10:21:45 -- unit/unittest.sh@292 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:07:55.713 10:21:48 -- unit/unittest.sh@293 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/app/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:07:58.239 10:21:51 -- unit/unittest.sh@294 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:00.766 10:21:54 -- unit/unittest.sh@295 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/examples/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:04.050 10:21:57 -- unit/unittest.sh@296 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:06.579 10:21:59 -- unit/unittest.sh@297 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/test/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:09.108 10:22:02 -- unit/unittest.sh@298 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:08:09.108 10:22:02 -- unit/unittest.sh@299 -- # genhtml /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info --output-directory /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:08:09.365 Reading data file /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:09.365 Found 309 entries. 
00:08:09.365 Found common filename prefix "/home/vagrant/spdk_repo/spdk" 00:08:09.365 Writing .css and .png files. 00:08:09.365 Generating output. 00:08:09.365 Processing file include/linux/virtio_ring.h 00:08:09.622 Processing file include/spdk/thread.h 00:08:09.622 Processing file include/spdk/trace.h 00:08:09.622 Processing file include/spdk/bdev_module.h 00:08:09.622 Processing file include/spdk/endian.h 00:08:09.622 Processing file include/spdk/base64.h 00:08:09.622 Processing file include/spdk/histogram_data.h 00:08:09.622 Processing file include/spdk/mmio.h 00:08:09.622 Processing file include/spdk/nvme_spec.h 00:08:09.622 Processing file include/spdk/nvme.h 00:08:09.622 Processing file include/spdk/nvmf_transport.h 00:08:09.622 Processing file include/spdk/util.h 00:08:09.880 Processing file include/spdk_internal/sock.h 00:08:09.880 Processing file include/spdk_internal/utf.h 00:08:09.880 Processing file include/spdk_internal/virtio.h 00:08:09.880 Processing file include/spdk_internal/rdma.h 00:08:09.880 Processing file include/spdk_internal/nvme_tcp.h 00:08:09.880 Processing file include/spdk_internal/sgl.h 00:08:10.138 Processing file lib/accel/accel_sw.c 00:08:10.138 Processing file lib/accel/accel.c 00:08:10.138 Processing file lib/accel/accel_rpc.c 00:08:10.396 Processing file lib/bdev/bdev.c 00:08:10.396 Processing file lib/bdev/bdev_rpc.c 00:08:10.396 Processing file lib/bdev/part.c 00:08:10.396 Processing file lib/bdev/scsi_nvme.c 00:08:10.396 Processing file lib/bdev/bdev_zone.c 00:08:10.654 Processing file lib/blob/zeroes.c 00:08:10.654 Processing file lib/blob/blobstore.c 00:08:10.654 Processing file lib/blob/blob_bs_dev.c 00:08:10.654 Processing file lib/blob/blobstore.h 00:08:10.654 Processing file lib/blob/request.c 00:08:10.654 Processing file lib/blobfs/blobfs.c 00:08:10.654 Processing file lib/blobfs/tree.c 00:08:10.654 Processing file lib/conf/conf.c 00:08:10.913 Processing file lib/dma/dma.c 00:08:11.171 Processing file lib/env_dpdk/env.c 00:08:11.171 Processing file lib/env_dpdk/pci.c 00:08:11.171 Processing file lib/env_dpdk/pci_event.c 00:08:11.171 Processing file lib/env_dpdk/pci_dpdk_2207.c 00:08:11.171 Processing file lib/env_dpdk/pci_virtio.c 00:08:11.171 Processing file lib/env_dpdk/pci_idxd.c 00:08:11.171 Processing file lib/env_dpdk/memory.c 00:08:11.171 Processing file lib/env_dpdk/sigbus_handler.c 00:08:11.171 Processing file lib/env_dpdk/pci_ioat.c 00:08:11.171 Processing file lib/env_dpdk/threads.c 00:08:11.171 Processing file lib/env_dpdk/init.c 00:08:11.171 Processing file lib/env_dpdk/pci_vmd.c 00:08:11.171 Processing file lib/env_dpdk/pci_dpdk.c 00:08:11.171 Processing file lib/env_dpdk/pci_dpdk_2211.c 00:08:11.171 Processing file lib/event/app.c 00:08:11.171 Processing file lib/event/app_rpc.c 00:08:11.171 Processing file lib/event/scheduler_static.c 00:08:11.171 Processing file lib/event/log_rpc.c 00:08:11.171 Processing file lib/event/reactor.c 00:08:11.738 Processing file lib/ftl/ftl_io.h 00:08:11.738 Processing file lib/ftl/ftl_nv_cache.h 00:08:11.738 Processing file lib/ftl/ftl_init.c 00:08:11.738 Processing file lib/ftl/ftl_l2p_flat.c 00:08:11.738 Processing file lib/ftl/ftl_band.h 00:08:11.738 Processing file lib/ftl/ftl_io.c 00:08:11.738 Processing file lib/ftl/ftl_band_ops.c 00:08:11.738 Processing file lib/ftl/ftl_writer.c 00:08:11.738 Processing file lib/ftl/ftl_trace.c 00:08:11.738 Processing file lib/ftl/ftl_debug.c 00:08:11.738 Processing file lib/ftl/ftl_writer.h 00:08:11.738 Processing file lib/ftl/ftl_core.h 00:08:11.738 
Processing file lib/ftl/ftl_reloc.c 00:08:11.738 Processing file lib/ftl/ftl_sb.c 00:08:11.738 Processing file lib/ftl/ftl_nv_cache_io.h 00:08:11.738 Processing file lib/ftl/ftl_l2p.c 00:08:11.738 Processing file lib/ftl/ftl_p2l.c 00:08:11.738 Processing file lib/ftl/ftl_nv_cache.c 00:08:11.738 Processing file lib/ftl/ftl_band.c 00:08:11.738 Processing file lib/ftl/ftl_debug.h 00:08:11.738 Processing file lib/ftl/ftl_layout.c 00:08:11.738 Processing file lib/ftl/ftl_rq.c 00:08:11.738 Processing file lib/ftl/ftl_core.c 00:08:11.738 Processing file lib/ftl/ftl_l2p_cache.c 00:08:11.738 Processing file lib/ftl/base/ftl_base_dev.c 00:08:11.738 Processing file lib/ftl/base/ftl_base_bdev.c 00:08:11.996 Processing file lib/ftl/mngt/ftl_mngt_md.c 00:08:11.996 Processing file lib/ftl/mngt/ftl_mngt_self_test.c 00:08:11.996 Processing file lib/ftl/mngt/ftl_mngt_band.c 00:08:11.996 Processing file lib/ftl/mngt/ftl_mngt_ioch.c 00:08:11.996 Processing file lib/ftl/mngt/ftl_mngt_recovery.c 00:08:11.996 Processing file lib/ftl/mngt/ftl_mngt_p2l.c 00:08:11.996 Processing file lib/ftl/mngt/ftl_mngt_startup.c 00:08:11.996 Processing file lib/ftl/mngt/ftl_mngt_upgrade.c 00:08:11.996 Processing file lib/ftl/mngt/ftl_mngt_misc.c 00:08:11.996 Processing file lib/ftl/mngt/ftl_mngt_l2p.c 00:08:11.996 Processing file lib/ftl/mngt/ftl_mngt.c 00:08:11.996 Processing file lib/ftl/mngt/ftl_mngt_shutdown.c 00:08:11.996 Processing file lib/ftl/mngt/ftl_mngt_bdev.c 00:08:11.996 Processing file lib/ftl/nvc/ftl_nvc_dev.c 00:08:11.996 Processing file lib/ftl/nvc/ftl_nvc_bdev_vss.c 00:08:12.255 Processing file lib/ftl/upgrade/ftl_layout_upgrade.c 00:08:12.255 Processing file lib/ftl/upgrade/ftl_sb_v3.c 00:08:12.255 Processing file lib/ftl/upgrade/ftl_sb_upgrade.c 00:08:12.255 Processing file lib/ftl/upgrade/ftl_sb_v5.c 00:08:12.255 Processing file lib/ftl/utils/ftl_df.h 00:08:12.255 Processing file lib/ftl/utils/ftl_property.c 00:08:12.255 Processing file lib/ftl/utils/ftl_layout_tracker_bdev.c 00:08:12.255 Processing file lib/ftl/utils/ftl_bitmap.c 00:08:12.255 Processing file lib/ftl/utils/ftl_mempool.c 00:08:12.255 Processing file lib/ftl/utils/ftl_conf.c 00:08:12.255 Processing file lib/ftl/utils/ftl_property.h 00:08:12.255 Processing file lib/ftl/utils/ftl_md.c 00:08:12.255 Processing file lib/ftl/utils/ftl_addr_utils.h 00:08:12.513 Processing file lib/idxd/idxd_user.c 00:08:12.513 Processing file lib/idxd/idxd_internal.h 00:08:12.513 Processing file lib/idxd/idxd.c 00:08:12.513 Processing file lib/init/rpc.c 00:08:12.513 Processing file lib/init/json_config.c 00:08:12.513 Processing file lib/init/subsystem.c 00:08:12.513 Processing file lib/init/subsystem_rpc.c 00:08:12.771 Processing file lib/ioat/ioat.c 00:08:12.771 Processing file lib/ioat/ioat_internal.h 00:08:13.029 Processing file lib/iscsi/portal_grp.c 00:08:13.029 Processing file lib/iscsi/init_grp.c 00:08:13.029 Processing file lib/iscsi/iscsi.c 00:08:13.029 Processing file lib/iscsi/conn.c 00:08:13.029 Processing file lib/iscsi/iscsi_subsystem.c 00:08:13.029 Processing file lib/iscsi/md5.c 00:08:13.029 Processing file lib/iscsi/task.c 00:08:13.029 Processing file lib/iscsi/param.c 00:08:13.029 Processing file lib/iscsi/iscsi.h 00:08:13.029 Processing file lib/iscsi/task.h 00:08:13.029 Processing file lib/iscsi/tgt_node.c 00:08:13.029 Processing file lib/iscsi/iscsi_rpc.c 00:08:13.287 Processing file lib/json/json_parse.c 00:08:13.287 Processing file lib/json/json_util.c 00:08:13.287 Processing file lib/json/json_write.c 00:08:13.287 Processing file 
lib/jsonrpc/jsonrpc_server.c 00:08:13.287 Processing file lib/jsonrpc/jsonrpc_client_tcp.c 00:08:13.287 Processing file lib/jsonrpc/jsonrpc_server_tcp.c 00:08:13.287 Processing file lib/jsonrpc/jsonrpc_client.c 00:08:13.288 Processing file lib/log/log.c 00:08:13.288 Processing file lib/log/log_flags.c 00:08:13.288 Processing file lib/log/log_deprecated.c 00:08:13.545 Processing file lib/lvol/lvol.c 00:08:13.545 Processing file lib/nbd/nbd.c 00:08:13.546 Processing file lib/nbd/nbd_rpc.c 00:08:13.546 Processing file lib/notify/notify.c 00:08:13.546 Processing file lib/notify/notify_rpc.c 00:08:14.499 Processing file lib/nvme/nvme_discovery.c 00:08:14.499 Processing file lib/nvme/nvme_cuse.c 00:08:14.499 Processing file lib/nvme/nvme_ns_cmd.c 00:08:14.499 Processing file lib/nvme/nvme_ns_ocssd_cmd.c 00:08:14.499 Processing file lib/nvme/nvme_fabric.c 00:08:14.499 Processing file lib/nvme/nvme_vfio_user.c 00:08:14.499 Processing file lib/nvme/nvme_transport.c 00:08:14.499 Processing file lib/nvme/nvme_pcie_internal.h 00:08:14.499 Processing file lib/nvme/nvme_rdma.c 00:08:14.499 Processing file lib/nvme/nvme_pcie.c 00:08:14.499 Processing file lib/nvme/nvme_io_msg.c 00:08:14.499 Processing file lib/nvme/nvme_opal.c 00:08:14.499 Processing file lib/nvme/nvme_qpair.c 00:08:14.499 Processing file lib/nvme/nvme_quirks.c 00:08:14.499 Processing file lib/nvme/nvme_tcp.c 00:08:14.499 Processing file lib/nvme/nvme.c 00:08:14.499 Processing file lib/nvme/nvme_ns.c 00:08:14.499 Processing file lib/nvme/nvme_ctrlr_ocssd_cmd.c 00:08:14.499 Processing file lib/nvme/nvme_ctrlr_cmd.c 00:08:14.499 Processing file lib/nvme/nvme_zns.c 00:08:14.499 Processing file lib/nvme/nvme_ctrlr.c 00:08:14.499 Processing file lib/nvme/nvme_internal.h 00:08:14.499 Processing file lib/nvme/nvme_poll_group.c 00:08:14.499 Processing file lib/nvme/nvme_pcie_common.c 00:08:14.757 Processing file lib/nvmf/nvmf_internal.h 00:08:14.757 Processing file lib/nvmf/rdma.c 00:08:14.757 Processing file lib/nvmf/subsystem.c 00:08:14.757 Processing file lib/nvmf/nvmf.c 00:08:14.757 Processing file lib/nvmf/tcp.c 00:08:14.757 Processing file lib/nvmf/ctrlr_discovery.c 00:08:14.757 Processing file lib/nvmf/transport.c 00:08:14.757 Processing file lib/nvmf/ctrlr.c 00:08:14.757 Processing file lib/nvmf/ctrlr_bdev.c 00:08:14.757 Processing file lib/nvmf/nvmf_rpc.c 00:08:15.015 Processing file lib/rdma/rdma_verbs.c 00:08:15.015 Processing file lib/rdma/common.c 00:08:15.015 Processing file lib/rpc/rpc.c 00:08:15.273 Processing file lib/scsi/lun.c 00:08:15.273 Processing file lib/scsi/scsi_pr.c 00:08:15.273 Processing file lib/scsi/task.c 00:08:15.273 Processing file lib/scsi/scsi.c 00:08:15.273 Processing file lib/scsi/port.c 00:08:15.273 Processing file lib/scsi/dev.c 00:08:15.273 Processing file lib/scsi/scsi_rpc.c 00:08:15.273 Processing file lib/scsi/scsi_bdev.c 00:08:15.273 Processing file lib/sock/sock.c 00:08:15.273 Processing file lib/sock/sock_rpc.c 00:08:15.531 Processing file lib/thread/thread.c 00:08:15.531 Processing file lib/thread/iobuf.c 00:08:15.531 Processing file lib/trace/trace_flags.c 00:08:15.531 Processing file lib/trace/trace_rpc.c 00:08:15.531 Processing file lib/trace/trace.c 00:08:15.531 Processing file lib/trace_parser/trace.cpp 00:08:15.788 Processing file lib/ut/ut.c 00:08:15.788 Processing file lib/ut_mock/mock.c 00:08:16.046 Processing file lib/util/xor.c 00:08:16.046 Processing file lib/util/hexlify.c 00:08:16.046 Processing file lib/util/base64.c 00:08:16.046 Processing file lib/util/fd_group.c 00:08:16.046 
Processing file lib/util/strerror_tls.c 00:08:16.046 Processing file lib/util/cpuset.c 00:08:16.046 Processing file lib/util/crc32.c 00:08:16.046 Processing file lib/util/crc64.c 00:08:16.046 Processing file lib/util/fd.c 00:08:16.046 Processing file lib/util/uuid.c 00:08:16.046 Processing file lib/util/iov.c 00:08:16.046 Processing file lib/util/crc32c.c 00:08:16.046 Processing file lib/util/file.c 00:08:16.046 Processing file lib/util/crc16.c 00:08:16.046 Processing file lib/util/zipf.c 00:08:16.046 Processing file lib/util/pipe.c 00:08:16.046 Processing file lib/util/bit_array.c 00:08:16.046 Processing file lib/util/crc32_ieee.c 00:08:16.046 Processing file lib/util/dif.c 00:08:16.046 Processing file lib/util/string.c 00:08:16.046 Processing file lib/util/math.c 00:08:16.305 Processing file lib/vfio_user/host/vfio_user_pci.c 00:08:16.305 Processing file lib/vfio_user/host/vfio_user.c 00:08:16.563 Processing file lib/vhost/vhost_blk.c 00:08:16.563 Processing file lib/vhost/vhost.c 00:08:16.563 Processing file lib/vhost/vhost_internal.h 00:08:16.563 Processing file lib/vhost/rte_vhost_user.c 00:08:16.563 Processing file lib/vhost/vhost_rpc.c 00:08:16.563 Processing file lib/vhost/vhost_scsi.c 00:08:16.563 Processing file lib/virtio/virtio_vfio_user.c 00:08:16.563 Processing file lib/virtio/virtio_vhost_user.c 00:08:16.563 Processing file lib/virtio/virtio_pci.c 00:08:16.563 Processing file lib/virtio/virtio.c 00:08:16.563 Processing file lib/vmd/led.c 00:08:16.563 Processing file lib/vmd/vmd.c 00:08:16.821 Processing file module/accel/dsa/accel_dsa.c 00:08:16.821 Processing file module/accel/dsa/accel_dsa_rpc.c 00:08:16.821 Processing file module/accel/error/accel_error.c 00:08:16.821 Processing file module/accel/error/accel_error_rpc.c 00:08:16.821 Processing file module/accel/iaa/accel_iaa_rpc.c 00:08:16.821 Processing file module/accel/iaa/accel_iaa.c 00:08:16.821 Processing file module/accel/ioat/accel_ioat.c 00:08:16.821 Processing file module/accel/ioat/accel_ioat_rpc.c 00:08:17.079 Processing file module/bdev/aio/bdev_aio.c 00:08:17.079 Processing file module/bdev/aio/bdev_aio_rpc.c 00:08:17.079 Processing file module/bdev/delay/vbdev_delay.c 00:08:17.079 Processing file module/bdev/delay/vbdev_delay_rpc.c 00:08:17.079 Processing file module/bdev/error/vbdev_error_rpc.c 00:08:17.079 Processing file module/bdev/error/vbdev_error.c 00:08:17.337 Processing file module/bdev/ftl/bdev_ftl_rpc.c 00:08:17.337 Processing file module/bdev/ftl/bdev_ftl.c 00:08:17.337 Processing file module/bdev/gpt/gpt.c 00:08:17.337 Processing file module/bdev/gpt/gpt.h 00:08:17.337 Processing file module/bdev/gpt/vbdev_gpt.c 00:08:17.595 Processing file module/bdev/iscsi/bdev_iscsi.c 00:08:17.595 Processing file module/bdev/iscsi/bdev_iscsi_rpc.c 00:08:17.595 Processing file module/bdev/lvol/vbdev_lvol.c 00:08:17.595 Processing file module/bdev/lvol/vbdev_lvol_rpc.c 00:08:17.854 Processing file module/bdev/malloc/bdev_malloc_rpc.c 00:08:17.854 Processing file module/bdev/malloc/bdev_malloc.c 00:08:17.854 Processing file module/bdev/null/bdev_null.c 00:08:17.854 Processing file module/bdev/null/bdev_null_rpc.c 00:08:18.112 Processing file module/bdev/nvme/bdev_nvme_cuse_rpc.c 00:08:18.112 Processing file module/bdev/nvme/bdev_mdns_client.c 00:08:18.112 Processing file module/bdev/nvme/nvme_rpc.c 00:08:18.112 Processing file module/bdev/nvme/bdev_nvme_rpc.c 00:08:18.112 Processing file module/bdev/nvme/bdev_nvme.c 00:08:18.112 Processing file module/bdev/nvme/vbdev_opal.c 00:08:18.112 Processing file 
module/bdev/nvme/vbdev_opal_rpc.c 00:08:18.369 Processing file module/bdev/passthru/vbdev_passthru_rpc.c 00:08:18.369 Processing file module/bdev/passthru/vbdev_passthru.c 00:08:18.627 Processing file module/bdev/raid/concat.c 00:08:18.627 Processing file module/bdev/raid/raid0.c 00:08:18.627 Processing file module/bdev/raid/bdev_raid_rpc.c 00:08:18.627 Processing file module/bdev/raid/raid5f.c 00:08:18.627 Processing file module/bdev/raid/bdev_raid.h 00:08:18.627 Processing file module/bdev/raid/bdev_raid_sb.c 00:08:18.627 Processing file module/bdev/raid/raid1.c 00:08:18.627 Processing file module/bdev/raid/bdev_raid.c 00:08:18.627 Processing file module/bdev/split/vbdev_split.c 00:08:18.627 Processing file module/bdev/split/vbdev_split_rpc.c 00:08:18.885 Processing file module/bdev/virtio/bdev_virtio_scsi.c 00:08:18.885 Processing file module/bdev/virtio/bdev_virtio_blk.c 00:08:18.885 Processing file module/bdev/virtio/bdev_virtio_rpc.c 00:08:18.885 Processing file module/bdev/zone_block/vbdev_zone_block_rpc.c 00:08:18.885 Processing file module/bdev/zone_block/vbdev_zone_block.c 00:08:18.885 Processing file module/blob/bdev/blob_bdev.c 00:08:19.144 Processing file module/blobfs/bdev/blobfs_bdev_rpc.c 00:08:19.144 Processing file module/blobfs/bdev/blobfs_bdev.c 00:08:19.144 Processing file module/env_dpdk/env_dpdk_rpc.c 00:08:19.144 Processing file module/event/subsystems/accel/accel.c 00:08:19.144 Processing file module/event/subsystems/bdev/bdev.c 00:08:19.402 Processing file module/event/subsystems/iobuf/iobuf_rpc.c 00:08:19.402 Processing file module/event/subsystems/iobuf/iobuf.c 00:08:19.402 Processing file module/event/subsystems/iscsi/iscsi.c 00:08:19.402 Processing file module/event/subsystems/nbd/nbd.c 00:08:19.402 Processing file module/event/subsystems/nvmf/nvmf_rpc.c 00:08:19.402 Processing file module/event/subsystems/nvmf/nvmf_tgt.c 00:08:19.659 Processing file module/event/subsystems/scheduler/scheduler.c 00:08:19.659 Processing file module/event/subsystems/scsi/scsi.c 00:08:19.659 Processing file module/event/subsystems/sock/sock.c 00:08:19.659 Processing file module/event/subsystems/vhost_blk/vhost_blk.c 00:08:19.916 Processing file module/event/subsystems/vhost_scsi/vhost_scsi.c 00:08:19.916 Processing file module/event/subsystems/vmd/vmd.c 00:08:19.916 Processing file module/event/subsystems/vmd/vmd_rpc.c 00:08:19.916 Processing file module/scheduler/dpdk_governor/dpdk_governor.c 00:08:19.916 Processing file module/scheduler/dynamic/scheduler_dynamic.c 00:08:20.174 Processing file module/scheduler/gscheduler/gscheduler.c 00:08:20.174 Processing file module/sock/sock_kernel.h 00:08:20.174 Processing file module/sock/posix/posix.c 00:08:20.174 Writing directory view page. 
00:08:20.174 Overall coverage rate: 00:08:20.174 lines......: 39.1% (39263 of 100392 lines) 00:08:20.174 functions..: 42.8% (3587 of 8384 functions) 00:08:20.174 00:08:20.174 00:08:20.174 ===================== 00:08:20.174 All unit tests passed 00:08:20.174 ===================== 00:08:20.174 Note: coverage report is here: /home/vagrant/spdk_repo/spdk//home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:08:20.174 10:22:14 -- unit/unittest.sh@302 -- # set +x 00:08:20.174 00:08:20.174 00:08:20.174 ************************************ 00:08:20.174 END TEST unittest 00:08:20.174 ************************************ 00:08:20.174 00:08:20.174 real 3m12.792s 00:08:20.174 user 2m46.844s 00:08:20.174 sys 0m14.853s 00:08:20.174 10:22:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:20.174 10:22:14 -- common/autotest_common.sh@10 -- # set +x 00:08:20.174 10:22:14 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:08:20.174 10:22:14 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:08:20.174 10:22:14 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:08:20.174 10:22:14 -- spdk/autotest.sh@173 -- # timing_enter lib 00:08:20.174 10:22:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:20.174 10:22:14 -- common/autotest_common.sh@10 -- # set +x 00:08:20.174 10:22:14 -- spdk/autotest.sh@175 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:08:20.174 10:22:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:20.174 10:22:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:20.174 10:22:14 -- common/autotest_common.sh@10 -- # set +x 00:08:20.174 ************************************ 00:08:20.174 START TEST env 00:08:20.174 ************************************ 00:08:20.174 10:22:14 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:08:20.431 * Looking for test storage... 
00:08:20.431 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:08:20.431 10:22:14 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:08:20.431 10:22:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:20.431 10:22:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:20.431 10:22:14 -- common/autotest_common.sh@10 -- # set +x 00:08:20.431 ************************************ 00:08:20.431 START TEST env_memory 00:08:20.431 ************************************ 00:08:20.431 10:22:14 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:08:20.431 00:08:20.431 00:08:20.431 CUnit - A unit testing framework for C - Version 2.1-3 00:08:20.431 http://cunit.sourceforge.net/ 00:08:20.431 00:08:20.431 00:08:20.431 Suite: memory 00:08:20.431 Test: alloc and free memory map ...[2024-07-12 10:22:14.217312] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:08:20.431 passed 00:08:20.431 Test: mem map translation ...[2024-07-12 10:22:14.264465] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:08:20.431 [2024-07-12 10:22:14.264692] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:08:20.431 [2024-07-12 10:22:14.264907] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:08:20.431 [2024-07-12 10:22:14.265090] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:08:20.431 passed 00:08:20.431 Test: mem map registration ...[2024-07-12 10:22:14.351058] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:08:20.431 [2024-07-12 10:22:14.351272] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:08:20.689 passed 00:08:20.689 Test: mem map adjacent registrations ...passed 00:08:20.689 00:08:20.689 Run Summary: Type Total Ran Passed Failed Inactive 00:08:20.689 suites 1 1 n/a 0 0 00:08:20.689 tests 4 4 4 0 0 00:08:20.689 asserts 152 152 152 0 n/a 00:08:20.689 00:08:20.689 Elapsed time = 0.290 seconds 00:08:20.689 ************************************ 00:08:20.689 END TEST env_memory 00:08:20.689 ************************************ 00:08:20.689 00:08:20.689 real 0m0.321s 00:08:20.689 user 0m0.295s 00:08:20.689 sys 0m0.025s 00:08:20.689 10:22:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:20.689 10:22:14 -- common/autotest_common.sh@10 -- # set +x 00:08:20.689 10:22:14 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:08:20.689 10:22:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:20.689 10:22:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:20.689 10:22:14 -- common/autotest_common.sh@10 -- # set +x 00:08:20.689 ************************************ 00:08:20.689 START TEST env_vtophys 00:08:20.689 ************************************ 00:08:20.689 10:22:14 -- common/autotest_common.sh@1104 -- # 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:08:20.689 EAL: lib.eal log level changed from notice to debug 00:08:20.689 EAL: Detected lcore 0 as core 0 on socket 0 00:08:20.689 EAL: Detected lcore 1 as core 0 on socket 0 00:08:20.689 EAL: Detected lcore 2 as core 0 on socket 0 00:08:20.689 EAL: Detected lcore 3 as core 0 on socket 0 00:08:20.689 EAL: Detected lcore 4 as core 0 on socket 0 00:08:20.689 EAL: Detected lcore 5 as core 0 on socket 0 00:08:20.689 EAL: Detected lcore 6 as core 0 on socket 0 00:08:20.689 EAL: Detected lcore 7 as core 0 on socket 0 00:08:20.689 EAL: Detected lcore 8 as core 0 on socket 0 00:08:20.689 EAL: Detected lcore 9 as core 0 on socket 0 00:08:20.689 EAL: Maximum logical cores by configuration: 128 00:08:20.689 EAL: Detected CPU lcores: 10 00:08:20.689 EAL: Detected NUMA nodes: 1 00:08:20.689 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:08:20.689 EAL: Checking presence of .so 'librte_eal.so.24' 00:08:20.689 EAL: Checking presence of .so 'librte_eal.so' 00:08:20.689 EAL: Detected static linkage of DPDK 00:08:20.689 EAL: No shared files mode enabled, IPC will be disabled 00:08:20.689 EAL: Selected IOVA mode 'PA' 00:08:20.689 EAL: Probing VFIO support... 00:08:20.689 EAL: IOMMU type 1 (Type 1) is supported 00:08:20.689 EAL: IOMMU type 7 (sPAPR) is not supported 00:08:20.689 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:08:20.689 EAL: VFIO support initialized 00:08:20.689 EAL: Ask a virtual area of 0x2e000 bytes 00:08:20.689 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:08:20.689 EAL: Setting up physically contiguous memory... 00:08:20.689 EAL: Setting maximum number of open files to 1048576 00:08:20.689 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:08:20.689 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:08:20.689 EAL: Ask a virtual area of 0x61000 bytes 00:08:20.689 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:08:20.689 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:20.689 EAL: Ask a virtual area of 0x400000000 bytes 00:08:20.689 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:08:20.689 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:08:20.689 EAL: Ask a virtual area of 0x61000 bytes 00:08:20.689 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:08:20.689 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:20.689 EAL: Ask a virtual area of 0x400000000 bytes 00:08:20.689 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:08:20.689 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:08:20.689 EAL: Ask a virtual area of 0x61000 bytes 00:08:20.689 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:08:20.689 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:20.689 EAL: Ask a virtual area of 0x400000000 bytes 00:08:20.689 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:08:20.689 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:08:20.689 EAL: Ask a virtual area of 0x61000 bytes 00:08:20.689 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:08:20.689 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:20.689 EAL: Ask a virtual area of 0x400000000 bytes 00:08:20.689 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:08:20.689 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:08:20.689 EAL: Hugepages will be freed exactly as allocated. 
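The env_memory suite above deliberately fed spdk_mem_register() and spdk_mem_map_set_translation() misaligned address/length pairs (e.g. vaddr=0x200000 len=1234) and checked that they are rejected. A hedged sketch of the valid path in application code; register_region is a made-up helper and buf stands in for hugepage-backed memory:

    #include "spdk/env.h"

    /* Sketch: register a 2 MB-aligned region with SPDK's memory maps so it
     * can be used for DMA, then unregister it. Unaligned vaddr/len pairs are
     * rejected, which is exactly what the env_memory tests provoke above. */
    static int register_region(void *buf)
    {
        int rc;

        rc = spdk_mem_register(buf, 0x200000);   /* one 2 MB hugepage */
        if (rc != 0) {
            return rc;
        }
        /* ... issue DMA through the region ... */
        return spdk_mem_unregister(buf, 0x200000);
    }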
00:08:20.689 EAL: No shared files mode enabled, IPC is disabled 00:08:20.689 EAL: No shared files mode enabled, IPC is disabled 00:08:20.947 EAL: TSC frequency is ~2200000 KHz 00:08:20.947 EAL: Main lcore 0 is ready (tid=7fbcd0d7fa40;cpuset=[0]) 00:08:20.947 EAL: Trying to obtain current memory policy. 00:08:20.947 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:20.947 EAL: Restoring previous memory policy: 0 00:08:20.947 EAL: request: mp_malloc_sync 00:08:20.947 EAL: No shared files mode enabled, IPC is disabled 00:08:20.947 EAL: Heap on socket 0 was expanded by 2MB 00:08:20.947 EAL: No shared files mode enabled, IPC is disabled 00:08:20.947 EAL: Mem event callback 'spdk:(nil)' registered 00:08:20.947 00:08:20.947 00:08:20.947 CUnit - A unit testing framework for C - Version 2.1-3 00:08:20.947 http://cunit.sourceforge.net/ 00:08:20.947 00:08:20.947 00:08:20.947 Suite: components_suite 00:08:21.514 Test: vtophys_malloc_test ...passed 00:08:21.514 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:08:21.514 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:21.514 EAL: Restoring previous memory policy: 0 00:08:21.514 EAL: Calling mem event callback 'spdk:(nil)' 00:08:21.514 EAL: request: mp_malloc_sync 00:08:21.514 EAL: No shared files mode enabled, IPC is disabled 00:08:21.514 EAL: Heap on socket 0 was expanded by 4MB 00:08:21.514 EAL: Calling mem event callback 'spdk:(nil)' 00:08:21.514 EAL: request: mp_malloc_sync 00:08:21.514 EAL: No shared files mode enabled, IPC is disabled 00:08:21.514 EAL: Heap on socket 0 was shrunk by 4MB 00:08:21.514 EAL: Trying to obtain current memory policy. 00:08:21.514 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:21.514 EAL: Restoring previous memory policy: 0 00:08:21.514 EAL: Calling mem event callback 'spdk:(nil)' 00:08:21.514 EAL: request: mp_malloc_sync 00:08:21.514 EAL: No shared files mode enabled, IPC is disabled 00:08:21.514 EAL: Heap on socket 0 was expanded by 6MB 00:08:21.514 EAL: Calling mem event callback 'spdk:(nil)' 00:08:21.514 EAL: request: mp_malloc_sync 00:08:21.514 EAL: No shared files mode enabled, IPC is disabled 00:08:21.514 EAL: Heap on socket 0 was shrunk by 6MB 00:08:21.514 EAL: Trying to obtain current memory policy. 00:08:21.514 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:21.514 EAL: Restoring previous memory policy: 0 00:08:21.514 EAL: Calling mem event callback 'spdk:(nil)' 00:08:21.514 EAL: request: mp_malloc_sync 00:08:21.514 EAL: No shared files mode enabled, IPC is disabled 00:08:21.514 EAL: Heap on socket 0 was expanded by 10MB 00:08:21.515 EAL: Calling mem event callback 'spdk:(nil)' 00:08:21.515 EAL: request: mp_malloc_sync 00:08:21.515 EAL: No shared files mode enabled, IPC is disabled 00:08:21.515 EAL: Heap on socket 0 was shrunk by 10MB 00:08:21.515 EAL: Trying to obtain current memory policy. 00:08:21.515 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:21.515 EAL: Restoring previous memory policy: 0 00:08:21.515 EAL: Calling mem event callback 'spdk:(nil)' 00:08:21.515 EAL: request: mp_malloc_sync 00:08:21.515 EAL: No shared files mode enabled, IPC is disabled 00:08:21.515 EAL: Heap on socket 0 was expanded by 18MB 00:08:21.515 EAL: Calling mem event callback 'spdk:(nil)' 00:08:21.515 EAL: request: mp_malloc_sync 00:08:21.515 EAL: No shared files mode enabled, IPC is disabled 00:08:21.515 EAL: Heap on socket 0 was shrunk by 18MB 00:08:21.515 EAL: Trying to obtain current memory policy. 
00:08:21.515 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:21.515 EAL: Restoring previous memory policy: 0 00:08:21.515 EAL: Calling mem event callback 'spdk:(nil)' 00:08:21.515 EAL: request: mp_malloc_sync 00:08:21.515 EAL: No shared files mode enabled, IPC is disabled 00:08:21.515 EAL: Heap on socket 0 was expanded by 34MB 00:08:21.515 EAL: Calling mem event callback 'spdk:(nil)' 00:08:21.515 EAL: request: mp_malloc_sync 00:08:21.515 EAL: No shared files mode enabled, IPC is disabled 00:08:21.515 EAL: Heap on socket 0 was shrunk by 34MB 00:08:21.515 EAL: Trying to obtain current memory policy. 00:08:21.515 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:21.515 EAL: Restoring previous memory policy: 0 00:08:21.515 EAL: Calling mem event callback 'spdk:(nil)' 00:08:21.515 EAL: request: mp_malloc_sync 00:08:21.515 EAL: No shared files mode enabled, IPC is disabled 00:08:21.515 EAL: Heap on socket 0 was expanded by 66MB 00:08:21.773 EAL: Calling mem event callback 'spdk:(nil)' 00:08:21.773 EAL: request: mp_malloc_sync 00:08:21.773 EAL: No shared files mode enabled, IPC is disabled 00:08:21.773 EAL: Heap on socket 0 was shrunk by 66MB 00:08:21.773 EAL: Trying to obtain current memory policy. 00:08:21.773 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:21.773 EAL: Restoring previous memory policy: 0 00:08:21.773 EAL: Calling mem event callback 'spdk:(nil)' 00:08:21.773 EAL: request: mp_malloc_sync 00:08:21.773 EAL: No shared files mode enabled, IPC is disabled 00:08:21.773 EAL: Heap on socket 0 was expanded by 130MB 00:08:22.031 EAL: Calling mem event callback 'spdk:(nil)' 00:08:22.031 EAL: request: mp_malloc_sync 00:08:22.031 EAL: No shared files mode enabled, IPC is disabled 00:08:22.031 EAL: Heap on socket 0 was shrunk by 130MB 00:08:22.289 EAL: Trying to obtain current memory policy. 00:08:22.289 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:22.289 EAL: Restoring previous memory policy: 0 00:08:22.289 EAL: Calling mem event callback 'spdk:(nil)' 00:08:22.289 EAL: request: mp_malloc_sync 00:08:22.289 EAL: No shared files mode enabled, IPC is disabled 00:08:22.289 EAL: Heap on socket 0 was expanded by 258MB 00:08:22.877 EAL: Calling mem event callback 'spdk:(nil)' 00:08:22.877 EAL: request: mp_malloc_sync 00:08:22.877 EAL: No shared files mode enabled, IPC is disabled 00:08:22.877 EAL: Heap on socket 0 was shrunk by 258MB 00:08:23.172 EAL: Trying to obtain current memory policy. 00:08:23.172 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:23.430 EAL: Restoring previous memory policy: 0 00:08:23.430 EAL: Calling mem event callback 'spdk:(nil)' 00:08:23.430 EAL: request: mp_malloc_sync 00:08:23.430 EAL: No shared files mode enabled, IPC is disabled 00:08:23.430 EAL: Heap on socket 0 was expanded by 514MB 00:08:24.364 EAL: Calling mem event callback 'spdk:(nil)' 00:08:24.364 EAL: request: mp_malloc_sync 00:08:24.364 EAL: No shared files mode enabled, IPC is disabled 00:08:24.364 EAL: Heap on socket 0 was shrunk by 514MB 00:08:24.931 EAL: Trying to obtain current memory policy. 
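Each "expanded by N MB"/"shrunk by N MB" pair above is the EAL heap growing around one allocation and releasing the hugepages again on free. Roughly what a single vtophys_malloc_test step does, as a sketch for this tree (spdk_vtophys() took only a buffer in very old releases and later gained the size argument, so treat the call as an assumption):

    #include <assert.h>
    #include "spdk/env.h"

    /* Sketch of one test step: allocate DMA-safe memory (heap expands),
     * translate virtual to physical, free it (heap shrinks again). */
    static void malloc_translate_free(size_t size)
    {
        uint64_t len = size;
        void *buf = spdk_malloc(size, 0x200000 /* 2 MB align */, NULL,
                                SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);

        if (buf != NULL) {
            uint64_t paddr = spdk_vtophys(buf, &len);
            assert(paddr != SPDK_VTOPHYS_ERROR);
            spdk_free(buf);
        }
    }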
00:08:24.931 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:25.189 EAL: Restoring previous memory policy: 0 00:08:25.189 EAL: Calling mem event callback 'spdk:(nil)' 00:08:25.189 EAL: request: mp_malloc_sync 00:08:25.189 EAL: No shared files mode enabled, IPC is disabled 00:08:25.189 EAL: Heap on socket 0 was expanded by 1026MB 00:08:27.090 EAL: Calling mem event callback 'spdk:(nil)' 00:08:27.090 EAL: request: mp_malloc_sync 00:08:27.090 EAL: No shared files mode enabled, IPC is disabled 00:08:27.090 EAL: Heap on socket 0 was shrunk by 1026MB 00:08:28.989 passed 00:08:28.989 00:08:28.989 Run Summary: Type Total Ran Passed Failed Inactive 00:08:28.989 suites 1 1 n/a 0 0 00:08:28.989 tests 2 2 2 0 0 00:08:28.989 asserts 6496 6496 6496 0 n/a 00:08:28.989 00:08:28.989 Elapsed time = 7.641 seconds 00:08:28.989 EAL: Calling mem event callback 'spdk:(nil)' 00:08:28.989 EAL: request: mp_malloc_sync 00:08:28.989 EAL: No shared files mode enabled, IPC is disabled 00:08:28.989 EAL: Heap on socket 0 was shrunk by 2MB 00:08:28.989 EAL: No shared files mode enabled, IPC is disabled 00:08:28.989 EAL: No shared files mode enabled, IPC is disabled 00:08:28.989 EAL: No shared files mode enabled, IPC is disabled 00:08:28.989 ************************************ 00:08:28.989 END TEST env_vtophys 00:08:28.989 ************************************ 00:08:28.989 00:08:28.989 real 0m7.960s 00:08:28.989 user 0m6.805s 00:08:28.989 sys 0m1.000s 00:08:28.989 10:22:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:28.989 10:22:22 -- common/autotest_common.sh@10 -- # set +x 00:08:28.989 10:22:22 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:08:28.989 10:22:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:28.989 10:22:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:28.989 10:22:22 -- common/autotest_common.sh@10 -- # set +x 00:08:28.989 ************************************ 00:08:28.989 START TEST env_pci 00:08:28.989 ************************************ 00:08:28.989 10:22:22 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:08:28.989 00:08:28.989 00:08:28.989 CUnit - A unit testing framework for C - Version 2.1-3 00:08:28.989 http://cunit.sourceforge.net/ 00:08:28.989 00:08:28.989 00:08:28.989 Suite: pci 00:08:28.989 Test: pci_hook ...[2024-07-12 10:22:22.571917] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 105092 has claimed it 00:08:28.989 EAL: Cannot find device (10000:00:01.0) 00:08:28.989 EAL: Failed to attach device on primary process 00:08:28.989 passed 00:08:28.989 00:08:28.989 Run Summary: Type Total Ran Passed Failed Inactive 00:08:28.989 suites 1 1 n/a 0 0 00:08:28.989 tests 1 1 1 0 0 00:08:28.989 asserts 25 25 25 0 n/a 00:08:28.989 00:08:28.989 Elapsed time = 0.007 seconds 00:08:28.989 ************************************ 00:08:28.989 END TEST env_pci 00:08:28.989 ************************************ 00:08:28.989 00:08:28.989 real 0m0.095s 00:08:28.989 user 0m0.059s 00:08:28.989 sys 0m0.036s 00:08:28.989 10:22:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:28.989 10:22:22 -- common/autotest_common.sh@10 -- # set +x 00:08:28.989 10:22:22 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:08:28.989 10:22:22 -- env/env.sh@15 -- # uname 00:08:28.989 10:22:22 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:08:28.989 10:22:22 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:08:28.989 10:22:22 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:28.989 10:22:22 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:08:28.989 10:22:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:28.989 10:22:22 -- common/autotest_common.sh@10 -- # set +x 00:08:28.989 ************************************ 00:08:28.989 START TEST env_dpdk_post_init 00:08:28.989 ************************************ 00:08:28.989 10:22:22 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:28.989 EAL: Detected CPU lcores: 10 00:08:28.989 EAL: Detected NUMA nodes: 1 00:08:28.989 EAL: Detected static linkage of DPDK 00:08:28.989 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:28.989 EAL: Selected IOVA mode 'PA' 00:08:28.989 EAL: VFIO support initialized 00:08:28.989 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:28.989 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:08:29.247 Starting DPDK initialization... 00:08:29.247 Starting SPDK post initialization... 00:08:29.247 SPDK NVMe probe 00:08:29.247 Attaching to 0000:00:06.0 00:08:29.247 Attached to 0000:00:06.0 00:08:29.247 Cleaning up... 00:08:29.247 00:08:29.247 real 0m0.282s 00:08:29.247 user 0m0.092s 00:08:29.247 sys 0m0.092s 00:08:29.247 ************************************ 00:08:29.247 END TEST env_dpdk_post_init 00:08:29.247 ************************************ 00:08:29.247 10:22:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:29.247 10:22:22 -- common/autotest_common.sh@10 -- # set +x 00:08:29.247 10:22:23 -- env/env.sh@26 -- # uname 00:08:29.247 10:22:23 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:08:29.247 10:22:23 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:08:29.247 10:22:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:29.247 10:22:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:29.247 10:22:23 -- common/autotest_common.sh@10 -- # set +x 00:08:29.247 ************************************ 00:08:29.247 START TEST env_mem_callbacks 00:08:29.247 ************************************ 00:08:29.247 10:22:23 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:08:29.247 EAL: Detected CPU lcores: 10 00:08:29.247 EAL: Detected NUMA nodes: 1 00:08:29.247 EAL: Detected static linkage of DPDK 00:08:29.247 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:29.247 EAL: Selected IOVA mode 'PA' 00:08:29.247 EAL: VFIO support initialized 00:08:29.506 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:29.506 00:08:29.506 00:08:29.506 CUnit - A unit testing framework for C - Version 2.1-3 00:08:29.506 http://cunit.sourceforge.net/ 00:08:29.506 00:08:29.506 00:08:29.506 Suite: memory 00:08:29.506 Test: test ... 
00:08:29.506 register 0x200000200000 2097152 00:08:29.506 malloc 3145728 00:08:29.506 register 0x200000400000 4194304 00:08:29.506 buf 0x2000004fffc0 len 3145728 PASSED 00:08:29.506 malloc 64 00:08:29.506 buf 0x2000004ffec0 len 64 PASSED 00:08:29.506 malloc 4194304 00:08:29.506 register 0x200000800000 6291456 00:08:29.506 buf 0x2000009fffc0 len 4194304 PASSED 00:08:29.506 free 0x2000004fffc0 3145728 00:08:29.506 free 0x2000004ffec0 64 00:08:29.506 unregister 0x200000400000 4194304 PASSED 00:08:29.506 free 0x2000009fffc0 4194304 00:08:29.506 unregister 0x200000800000 6291456 PASSED 00:08:29.506 malloc 8388608 00:08:29.506 register 0x200000400000 10485760 00:08:29.506 buf 0x2000005fffc0 len 8388608 PASSED 00:08:29.506 free 0x2000005fffc0 8388608 00:08:29.506 unregister 0x200000400000 10485760 PASSED 00:08:29.506 passed 00:08:29.506 00:08:29.506 Run Summary: Type Total Ran Passed Failed Inactive 00:08:29.506 suites 1 1 n/a 0 0 00:08:29.506 tests 1 1 1 0 0 00:08:29.506 asserts 15 15 15 0 n/a 00:08:29.506 00:08:29.506 Elapsed time = 0.057 seconds 00:08:29.506 00:08:29.506 real 0m0.285s 00:08:29.506 user 0m0.111s 00:08:29.506 sys 0m0.073s 00:08:29.506 10:22:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:29.506 ************************************ 00:08:29.506 END TEST env_mem_callbacks 00:08:29.506 ************************************ 00:08:29.506 10:22:23 -- common/autotest_common.sh@10 -- # set +x 00:08:29.506 00:08:29.506 real 0m9.266s 00:08:29.506 user 0m7.562s 00:08:29.506 sys 0m1.341s 00:08:29.506 ************************************ 00:08:29.506 END TEST env 00:08:29.506 ************************************ 00:08:29.506 10:22:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:29.506 10:22:23 -- common/autotest_common.sh@10 -- # set +x 00:08:29.506 10:22:23 -- spdk/autotest.sh@176 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:08:29.506 10:22:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:29.506 10:22:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:29.506 10:22:23 -- common/autotest_common.sh@10 -- # set +x 00:08:29.506 ************************************ 00:08:29.506 START TEST rpc 00:08:29.506 ************************************ 00:08:29.506 10:22:23 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:08:29.764 * Looking for test storage... 00:08:29.764 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:08:29.764 10:22:23 -- rpc/rpc.sh@65 -- # spdk_pid=105222 00:08:29.764 10:22:23 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:29.764 10:22:23 -- rpc/rpc.sh@67 -- # waitforlisten 105222 00:08:29.764 10:22:23 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:08:29.764 10:22:23 -- common/autotest_common.sh@819 -- # '[' -z 105222 ']' 00:08:29.764 10:22:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:29.764 10:22:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:29.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:29.764 10:22:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
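The register/unregister lines above are the mem_callbacks test tracing SPDK's memory-hotplug hooks: 'register' fires when a newly mapped region is reported to the callback, 'unregister' when it is unmapped, and each PASSED marks an address/length assertion. The rpc suite starting here launches spdk_tgt with '-e bdev' and blocks in waitforlisten; a hedged sketch of what that wait amounts to (the polling loop is an illustration, only the rpc.py path and the socket path are taken from the log):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # poll the target's UNIX socket until it answers an RPC
  until $rpc -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done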
00:08:29.764 10:22:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:29.764 10:22:23 -- common/autotest_common.sh@10 -- # set +x 00:08:29.764 [2024-07-12 10:22:23.556098] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:29.764 [2024-07-12 10:22:23.556341] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105222 ] 00:08:30.022 [2024-07-12 10:22:23.718206] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.022 [2024-07-12 10:22:23.930011] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:30.022 [2024-07-12 10:22:23.930279] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:08:30.022 [2024-07-12 10:22:23.930314] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 105222' to capture a snapshot of events at runtime. 00:08:30.022 [2024-07-12 10:22:23.930350] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid105222 for offline analysis/debug. 00:08:30.022 [2024-07-12 10:22:23.930439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.395 10:22:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:31.396 10:22:25 -- common/autotest_common.sh@852 -- # return 0 00:08:31.396 10:22:25 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:08:31.396 10:22:25 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:08:31.396 10:22:25 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:08:31.396 10:22:25 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:08:31.396 10:22:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:31.396 10:22:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:31.396 10:22:25 -- common/autotest_common.sh@10 -- # set +x 00:08:31.396 ************************************ 00:08:31.396 START TEST rpc_integrity 00:08:31.396 ************************************ 00:08:31.396 10:22:25 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:08:31.396 10:22:25 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:31.396 10:22:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.396 10:22:25 -- common/autotest_common.sh@10 -- # set +x 00:08:31.396 10:22:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.396 10:22:25 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:31.396 10:22:25 -- rpc/rpc.sh@13 -- # jq length 00:08:31.654 10:22:25 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:31.654 10:22:25 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:31.654 10:22:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.654 10:22:25 -- common/autotest_common.sh@10 -- # set +x 00:08:31.654 10:22:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.654 10:22:25 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:08:31.654 10:22:25 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:31.654 10:22:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.654 10:22:25 -- 
common/autotest_common.sh@10 -- # set +x 00:08:31.654 10:22:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.654 10:22:25 -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:31.654 { 00:08:31.654 "name": "Malloc0", 00:08:31.654 "aliases": [ 00:08:31.654 "076bd570-660e-44ff-bea5-63815c90b0bf" 00:08:31.654 ], 00:08:31.654 "product_name": "Malloc disk", 00:08:31.654 "block_size": 512, 00:08:31.654 "num_blocks": 16384, 00:08:31.654 "uuid": "076bd570-660e-44ff-bea5-63815c90b0bf", 00:08:31.654 "assigned_rate_limits": { 00:08:31.654 "rw_ios_per_sec": 0, 00:08:31.654 "rw_mbytes_per_sec": 0, 00:08:31.654 "r_mbytes_per_sec": 0, 00:08:31.654 "w_mbytes_per_sec": 0 00:08:31.654 }, 00:08:31.654 "claimed": false, 00:08:31.654 "zoned": false, 00:08:31.654 "supported_io_types": { 00:08:31.654 "read": true, 00:08:31.654 "write": true, 00:08:31.654 "unmap": true, 00:08:31.654 "write_zeroes": true, 00:08:31.654 "flush": true, 00:08:31.654 "reset": true, 00:08:31.654 "compare": false, 00:08:31.654 "compare_and_write": false, 00:08:31.654 "abort": true, 00:08:31.654 "nvme_admin": false, 00:08:31.654 "nvme_io": false 00:08:31.654 }, 00:08:31.654 "memory_domains": [ 00:08:31.654 { 00:08:31.654 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:31.654 "dma_device_type": 2 00:08:31.654 } 00:08:31.654 ], 00:08:31.654 "driver_specific": {} 00:08:31.654 } 00:08:31.654 ]' 00:08:31.654 10:22:25 -- rpc/rpc.sh@17 -- # jq length 00:08:31.654 10:22:25 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:31.654 10:22:25 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:08:31.654 10:22:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.654 10:22:25 -- common/autotest_common.sh@10 -- # set +x 00:08:31.654 [2024-07-12 10:22:25.434941] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:08:31.654 [2024-07-12 10:22:25.435049] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:31.654 [2024-07-12 10:22:25.435097] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:31.654 [2024-07-12 10:22:25.435123] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:31.654 [2024-07-12 10:22:25.437890] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:31.655 [2024-07-12 10:22:25.437972] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:31.655 Passthru0 00:08:31.655 10:22:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.655 10:22:25 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:31.655 10:22:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.655 10:22:25 -- common/autotest_common.sh@10 -- # set +x 00:08:31.655 10:22:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.655 10:22:25 -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:31.655 { 00:08:31.655 "name": "Malloc0", 00:08:31.655 "aliases": [ 00:08:31.655 "076bd570-660e-44ff-bea5-63815c90b0bf" 00:08:31.655 ], 00:08:31.655 "product_name": "Malloc disk", 00:08:31.655 "block_size": 512, 00:08:31.655 "num_blocks": 16384, 00:08:31.655 "uuid": "076bd570-660e-44ff-bea5-63815c90b0bf", 00:08:31.655 "assigned_rate_limits": { 00:08:31.655 "rw_ios_per_sec": 0, 00:08:31.655 "rw_mbytes_per_sec": 0, 00:08:31.655 "r_mbytes_per_sec": 0, 00:08:31.655 "w_mbytes_per_sec": 0 00:08:31.655 }, 00:08:31.655 "claimed": true, 00:08:31.655 "claim_type": "exclusive_write", 00:08:31.655 "zoned": false, 00:08:31.655 "supported_io_types": { 00:08:31.655 "read": true, 
00:08:31.655 "write": true, 00:08:31.655 "unmap": true, 00:08:31.655 "write_zeroes": true, 00:08:31.655 "flush": true, 00:08:31.655 "reset": true, 00:08:31.655 "compare": false, 00:08:31.655 "compare_and_write": false, 00:08:31.655 "abort": true, 00:08:31.655 "nvme_admin": false, 00:08:31.655 "nvme_io": false 00:08:31.655 }, 00:08:31.655 "memory_domains": [ 00:08:31.655 { 00:08:31.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:31.655 "dma_device_type": 2 00:08:31.655 } 00:08:31.655 ], 00:08:31.655 "driver_specific": {} 00:08:31.655 }, 00:08:31.655 { 00:08:31.655 "name": "Passthru0", 00:08:31.655 "aliases": [ 00:08:31.655 "276dbbe2-83c6-51de-b702-23aeec2feaad" 00:08:31.655 ], 00:08:31.655 "product_name": "passthru", 00:08:31.655 "block_size": 512, 00:08:31.655 "num_blocks": 16384, 00:08:31.655 "uuid": "276dbbe2-83c6-51de-b702-23aeec2feaad", 00:08:31.655 "assigned_rate_limits": { 00:08:31.655 "rw_ios_per_sec": 0, 00:08:31.655 "rw_mbytes_per_sec": 0, 00:08:31.655 "r_mbytes_per_sec": 0, 00:08:31.655 "w_mbytes_per_sec": 0 00:08:31.655 }, 00:08:31.655 "claimed": false, 00:08:31.655 "zoned": false, 00:08:31.655 "supported_io_types": { 00:08:31.655 "read": true, 00:08:31.655 "write": true, 00:08:31.655 "unmap": true, 00:08:31.655 "write_zeroes": true, 00:08:31.655 "flush": true, 00:08:31.655 "reset": true, 00:08:31.655 "compare": false, 00:08:31.655 "compare_and_write": false, 00:08:31.655 "abort": true, 00:08:31.655 "nvme_admin": false, 00:08:31.655 "nvme_io": false 00:08:31.655 }, 00:08:31.655 "memory_domains": [ 00:08:31.655 { 00:08:31.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:31.655 "dma_device_type": 2 00:08:31.655 } 00:08:31.655 ], 00:08:31.655 "driver_specific": { 00:08:31.655 "passthru": { 00:08:31.655 "name": "Passthru0", 00:08:31.655 "base_bdev_name": "Malloc0" 00:08:31.655 } 00:08:31.655 } 00:08:31.655 } 00:08:31.655 ]' 00:08:31.655 10:22:25 -- rpc/rpc.sh@21 -- # jq length 00:08:31.655 10:22:25 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:31.655 10:22:25 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:31.655 10:22:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.655 10:22:25 -- common/autotest_common.sh@10 -- # set +x 00:08:31.655 10:22:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.655 10:22:25 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:08:31.655 10:22:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.655 10:22:25 -- common/autotest_common.sh@10 -- # set +x 00:08:31.655 10:22:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.655 10:22:25 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:31.655 10:22:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.655 10:22:25 -- common/autotest_common.sh@10 -- # set +x 00:08:31.655 10:22:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.655 10:22:25 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:31.655 10:22:25 -- rpc/rpc.sh@26 -- # jq length 00:08:31.913 10:22:25 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:31.913 00:08:31.913 real 0m0.343s 00:08:31.913 user 0m0.235s 00:08:31.913 sys 0m0.022s 00:08:31.913 ************************************ 00:08:31.913 END TEST rpc_integrity 00:08:31.913 ************************************ 00:08:31.913 10:22:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:31.913 10:22:25 -- common/autotest_common.sh@10 -- # set +x 00:08:31.913 10:22:25 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:08:31.913 10:22:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 
00:08:31.913 10:22:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:31.913 10:22:25 -- common/autotest_common.sh@10 -- # set +x 00:08:31.913 ************************************ 00:08:31.913 START TEST rpc_plugins 00:08:31.913 ************************************ 00:08:31.913 10:22:25 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:08:31.913 10:22:25 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:08:31.913 10:22:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.913 10:22:25 -- common/autotest_common.sh@10 -- # set +x 00:08:31.913 10:22:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.913 10:22:25 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:08:31.913 10:22:25 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:08:31.913 10:22:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.913 10:22:25 -- common/autotest_common.sh@10 -- # set +x 00:08:31.913 10:22:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.913 10:22:25 -- rpc/rpc.sh@31 -- # bdevs='[ 00:08:31.913 { 00:08:31.913 "name": "Malloc1", 00:08:31.913 "aliases": [ 00:08:31.913 "b702d358-6b41-4c2e-8184-a440a4ea8ddb" 00:08:31.913 ], 00:08:31.913 "product_name": "Malloc disk", 00:08:31.913 "block_size": 4096, 00:08:31.913 "num_blocks": 256, 00:08:31.913 "uuid": "b702d358-6b41-4c2e-8184-a440a4ea8ddb", 00:08:31.914 "assigned_rate_limits": { 00:08:31.914 "rw_ios_per_sec": 0, 00:08:31.914 "rw_mbytes_per_sec": 0, 00:08:31.914 "r_mbytes_per_sec": 0, 00:08:31.914 "w_mbytes_per_sec": 0 00:08:31.914 }, 00:08:31.914 "claimed": false, 00:08:31.914 "zoned": false, 00:08:31.914 "supported_io_types": { 00:08:31.914 "read": true, 00:08:31.914 "write": true, 00:08:31.914 "unmap": true, 00:08:31.914 "write_zeroes": true, 00:08:31.914 "flush": true, 00:08:31.914 "reset": true, 00:08:31.914 "compare": false, 00:08:31.914 "compare_and_write": false, 00:08:31.914 "abort": true, 00:08:31.914 "nvme_admin": false, 00:08:31.914 "nvme_io": false 00:08:31.914 }, 00:08:31.914 "memory_domains": [ 00:08:31.914 { 00:08:31.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:31.914 "dma_device_type": 2 00:08:31.914 } 00:08:31.914 ], 00:08:31.914 "driver_specific": {} 00:08:31.914 } 00:08:31.914 ]' 00:08:31.914 10:22:25 -- rpc/rpc.sh@32 -- # jq length 00:08:31.914 10:22:25 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:08:31.914 10:22:25 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:08:31.914 10:22:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.914 10:22:25 -- common/autotest_common.sh@10 -- # set +x 00:08:31.914 10:22:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.914 10:22:25 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:08:31.914 10:22:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.914 10:22:25 -- common/autotest_common.sh@10 -- # set +x 00:08:31.914 10:22:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.914 10:22:25 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:08:31.914 10:22:25 -- rpc/rpc.sh@36 -- # jq length 00:08:31.914 10:22:25 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:08:31.914 00:08:31.914 real 0m0.170s 00:08:31.914 user 0m0.129s 00:08:31.914 sys 0m0.006s 00:08:31.914 ************************************ 00:08:31.914 END TEST rpc_plugins 00:08:31.914 ************************************ 00:08:31.914 10:22:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:31.914 10:22:25 -- common/autotest_common.sh@10 -- # set +x 00:08:32.172 10:22:25 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test 
rpc_trace_cmd_test 00:08:32.172 10:22:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:32.172 10:22:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:32.172 10:22:25 -- common/autotest_common.sh@10 -- # set +x 00:08:32.172 ************************************ 00:08:32.172 START TEST rpc_trace_cmd_test 00:08:32.172 ************************************ 00:08:32.172 10:22:25 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:08:32.172 10:22:25 -- rpc/rpc.sh@40 -- # local info 00:08:32.172 10:22:25 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:08:32.172 10:22:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:32.172 10:22:25 -- common/autotest_common.sh@10 -- # set +x 00:08:32.172 10:22:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:32.172 10:22:25 -- rpc/rpc.sh@42 -- # info='{ 00:08:32.172 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid105222", 00:08:32.172 "tpoint_group_mask": "0x8", 00:08:32.172 "iscsi_conn": { 00:08:32.172 "mask": "0x2", 00:08:32.172 "tpoint_mask": "0x0" 00:08:32.172 }, 00:08:32.172 "scsi": { 00:08:32.172 "mask": "0x4", 00:08:32.172 "tpoint_mask": "0x0" 00:08:32.172 }, 00:08:32.172 "bdev": { 00:08:32.172 "mask": "0x8", 00:08:32.172 "tpoint_mask": "0xffffffffffffffff" 00:08:32.172 }, 00:08:32.172 "nvmf_rdma": { 00:08:32.172 "mask": "0x10", 00:08:32.172 "tpoint_mask": "0x0" 00:08:32.172 }, 00:08:32.172 "nvmf_tcp": { 00:08:32.172 "mask": "0x20", 00:08:32.172 "tpoint_mask": "0x0" 00:08:32.172 }, 00:08:32.172 "ftl": { 00:08:32.172 "mask": "0x40", 00:08:32.172 "tpoint_mask": "0x0" 00:08:32.172 }, 00:08:32.172 "blobfs": { 00:08:32.172 "mask": "0x80", 00:08:32.172 "tpoint_mask": "0x0" 00:08:32.172 }, 00:08:32.172 "dsa": { 00:08:32.172 "mask": "0x200", 00:08:32.172 "tpoint_mask": "0x0" 00:08:32.172 }, 00:08:32.172 "thread": { 00:08:32.172 "mask": "0x400", 00:08:32.172 "tpoint_mask": "0x0" 00:08:32.172 }, 00:08:32.172 "nvme_pcie": { 00:08:32.172 "mask": "0x800", 00:08:32.172 "tpoint_mask": "0x0" 00:08:32.172 }, 00:08:32.172 "iaa": { 00:08:32.172 "mask": "0x1000", 00:08:32.172 "tpoint_mask": "0x0" 00:08:32.172 }, 00:08:32.172 "nvme_tcp": { 00:08:32.172 "mask": "0x2000", 00:08:32.172 "tpoint_mask": "0x0" 00:08:32.172 }, 00:08:32.172 "bdev_nvme": { 00:08:32.172 "mask": "0x4000", 00:08:32.172 "tpoint_mask": "0x0" 00:08:32.172 } 00:08:32.172 }' 00:08:32.172 10:22:25 -- rpc/rpc.sh@43 -- # jq length 00:08:32.172 10:22:25 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:08:32.172 10:22:25 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:08:32.172 10:22:26 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:08:32.172 10:22:26 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:08:32.172 10:22:26 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:08:32.172 10:22:26 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:08:32.431 10:22:26 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:08:32.431 10:22:26 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:08:32.431 ************************************ 00:08:32.431 END TEST rpc_trace_cmd_test 00:08:32.431 ************************************ 00:08:32.431 10:22:26 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:08:32.431 00:08:32.431 real 0m0.327s 00:08:32.431 user 0m0.305s 00:08:32.431 sys 0m0.010s 00:08:32.431 10:22:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:32.431 10:22:26 -- common/autotest_common.sh@10 -- # set +x 00:08:32.431 10:22:26 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:08:32.431 10:22:26 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:08:32.431 10:22:26 -- rpc/rpc.sh@81 -- # 
run_test rpc_daemon_integrity rpc_integrity 00:08:32.431 10:22:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:32.431 10:22:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:32.431 10:22:26 -- common/autotest_common.sh@10 -- # set +x 00:08:32.431 ************************************ 00:08:32.431 START TEST rpc_daemon_integrity 00:08:32.431 ************************************ 00:08:32.431 10:22:26 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:08:32.431 10:22:26 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:32.431 10:22:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:32.431 10:22:26 -- common/autotest_common.sh@10 -- # set +x 00:08:32.431 10:22:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:32.431 10:22:26 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:32.431 10:22:26 -- rpc/rpc.sh@13 -- # jq length 00:08:32.431 10:22:26 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:32.431 10:22:26 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:32.431 10:22:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:32.431 10:22:26 -- common/autotest_common.sh@10 -- # set +x 00:08:32.431 10:22:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:32.431 10:22:26 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:08:32.431 10:22:26 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:32.431 10:22:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:32.431 10:22:26 -- common/autotest_common.sh@10 -- # set +x 00:08:32.690 10:22:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:32.690 10:22:26 -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:32.690 { 00:08:32.690 "name": "Malloc2", 00:08:32.690 "aliases": [ 00:08:32.690 "48a95f96-11cc-48b6-a3dd-c48e96864323" 00:08:32.690 ], 00:08:32.690 "product_name": "Malloc disk", 00:08:32.690 "block_size": 512, 00:08:32.690 "num_blocks": 16384, 00:08:32.690 "uuid": "48a95f96-11cc-48b6-a3dd-c48e96864323", 00:08:32.690 "assigned_rate_limits": { 00:08:32.690 "rw_ios_per_sec": 0, 00:08:32.690 "rw_mbytes_per_sec": 0, 00:08:32.690 "r_mbytes_per_sec": 0, 00:08:32.690 "w_mbytes_per_sec": 0 00:08:32.690 }, 00:08:32.690 "claimed": false, 00:08:32.690 "zoned": false, 00:08:32.690 "supported_io_types": { 00:08:32.690 "read": true, 00:08:32.690 "write": true, 00:08:32.690 "unmap": true, 00:08:32.690 "write_zeroes": true, 00:08:32.690 "flush": true, 00:08:32.690 "reset": true, 00:08:32.690 "compare": false, 00:08:32.690 "compare_and_write": false, 00:08:32.690 "abort": true, 00:08:32.690 "nvme_admin": false, 00:08:32.690 "nvme_io": false 00:08:32.690 }, 00:08:32.690 "memory_domains": [ 00:08:32.690 { 00:08:32.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.690 "dma_device_type": 2 00:08:32.690 } 00:08:32.690 ], 00:08:32.690 "driver_specific": {} 00:08:32.690 } 00:08:32.690 ]' 00:08:32.690 10:22:26 -- rpc/rpc.sh@17 -- # jq length 00:08:32.690 10:22:26 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:32.690 10:22:26 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:08:32.690 10:22:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:32.690 10:22:26 -- common/autotest_common.sh@10 -- # set +x 00:08:32.690 [2024-07-12 10:22:26.432565] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:08:32.690 [2024-07-12 10:22:26.432804] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:32.690 [2024-07-12 10:22:26.432880] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:32.690 
[2024-07-12 10:22:26.433002] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:32.690 [2024-07-12 10:22:26.435859] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:32.690 [2024-07-12 10:22:26.436042] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:32.690 Passthru0 00:08:32.690 10:22:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:32.690 10:22:26 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:32.690 10:22:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:32.690 10:22:26 -- common/autotest_common.sh@10 -- # set +x 00:08:32.690 10:22:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:32.690 10:22:26 -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:32.690 { 00:08:32.690 "name": "Malloc2", 00:08:32.690 "aliases": [ 00:08:32.690 "48a95f96-11cc-48b6-a3dd-c48e96864323" 00:08:32.690 ], 00:08:32.690 "product_name": "Malloc disk", 00:08:32.690 "block_size": 512, 00:08:32.690 "num_blocks": 16384, 00:08:32.690 "uuid": "48a95f96-11cc-48b6-a3dd-c48e96864323", 00:08:32.690 "assigned_rate_limits": { 00:08:32.690 "rw_ios_per_sec": 0, 00:08:32.690 "rw_mbytes_per_sec": 0, 00:08:32.690 "r_mbytes_per_sec": 0, 00:08:32.690 "w_mbytes_per_sec": 0 00:08:32.690 }, 00:08:32.690 "claimed": true, 00:08:32.690 "claim_type": "exclusive_write", 00:08:32.690 "zoned": false, 00:08:32.690 "supported_io_types": { 00:08:32.690 "read": true, 00:08:32.690 "write": true, 00:08:32.690 "unmap": true, 00:08:32.690 "write_zeroes": true, 00:08:32.690 "flush": true, 00:08:32.690 "reset": true, 00:08:32.690 "compare": false, 00:08:32.690 "compare_and_write": false, 00:08:32.690 "abort": true, 00:08:32.690 "nvme_admin": false, 00:08:32.690 "nvme_io": false 00:08:32.690 }, 00:08:32.690 "memory_domains": [ 00:08:32.690 { 00:08:32.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.690 "dma_device_type": 2 00:08:32.690 } 00:08:32.690 ], 00:08:32.690 "driver_specific": {} 00:08:32.690 }, 00:08:32.690 { 00:08:32.690 "name": "Passthru0", 00:08:32.690 "aliases": [ 00:08:32.690 "563c9bfc-6cfc-5e8e-932d-0e0d0364a00b" 00:08:32.690 ], 00:08:32.690 "product_name": "passthru", 00:08:32.690 "block_size": 512, 00:08:32.690 "num_blocks": 16384, 00:08:32.690 "uuid": "563c9bfc-6cfc-5e8e-932d-0e0d0364a00b", 00:08:32.690 "assigned_rate_limits": { 00:08:32.690 "rw_ios_per_sec": 0, 00:08:32.690 "rw_mbytes_per_sec": 0, 00:08:32.690 "r_mbytes_per_sec": 0, 00:08:32.690 "w_mbytes_per_sec": 0 00:08:32.690 }, 00:08:32.690 "claimed": false, 00:08:32.690 "zoned": false, 00:08:32.690 "supported_io_types": { 00:08:32.690 "read": true, 00:08:32.690 "write": true, 00:08:32.690 "unmap": true, 00:08:32.690 "write_zeroes": true, 00:08:32.690 "flush": true, 00:08:32.690 "reset": true, 00:08:32.690 "compare": false, 00:08:32.690 "compare_and_write": false, 00:08:32.690 "abort": true, 00:08:32.690 "nvme_admin": false, 00:08:32.690 "nvme_io": false 00:08:32.690 }, 00:08:32.690 "memory_domains": [ 00:08:32.690 { 00:08:32.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.690 "dma_device_type": 2 00:08:32.690 } 00:08:32.690 ], 00:08:32.690 "driver_specific": { 00:08:32.690 "passthru": { 00:08:32.690 "name": "Passthru0", 00:08:32.690 "base_bdev_name": "Malloc2" 00:08:32.690 } 00:08:32.690 } 00:08:32.690 } 00:08:32.690 ]' 00:08:32.690 10:22:26 -- rpc/rpc.sh@21 -- # jq length 00:08:32.690 10:22:26 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:32.690 10:22:26 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:32.690 10:22:26 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:08:32.690 10:22:26 -- common/autotest_common.sh@10 -- # set +x 00:08:32.690 10:22:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:32.690 10:22:26 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:08:32.690 10:22:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:32.690 10:22:26 -- common/autotest_common.sh@10 -- # set +x 00:08:32.690 10:22:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:32.690 10:22:26 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:32.690 10:22:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:32.690 10:22:26 -- common/autotest_common.sh@10 -- # set +x 00:08:32.690 10:22:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:32.690 10:22:26 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:32.690 10:22:26 -- rpc/rpc.sh@26 -- # jq length 00:08:32.948 10:22:26 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:32.948 00:08:32.948 real 0m0.372s 00:08:32.948 user 0m0.243s 00:08:32.948 sys 0m0.027s 00:08:32.948 10:22:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:32.948 10:22:26 -- common/autotest_common.sh@10 -- # set +x 00:08:32.948 ************************************ 00:08:32.948 END TEST rpc_daemon_integrity 00:08:32.948 ************************************ 00:08:32.948 10:22:26 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:08:32.948 10:22:26 -- rpc/rpc.sh@84 -- # killprocess 105222 00:08:32.948 10:22:26 -- common/autotest_common.sh@926 -- # '[' -z 105222 ']' 00:08:32.948 10:22:26 -- common/autotest_common.sh@930 -- # kill -0 105222 00:08:32.948 10:22:26 -- common/autotest_common.sh@931 -- # uname 00:08:32.948 10:22:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:32.948 10:22:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 105222 00:08:32.948 killing process with pid 105222 00:08:32.948 10:22:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:32.948 10:22:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:32.948 10:22:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 105222' 00:08:32.948 10:22:26 -- common/autotest_common.sh@945 -- # kill 105222 00:08:32.948 10:22:26 -- common/autotest_common.sh@950 -- # wait 105222 00:08:35.478 ************************************ 00:08:35.478 END TEST rpc 00:08:35.478 ************************************ 00:08:35.478 00:08:35.478 real 0m5.632s 00:08:35.478 user 0m6.709s 00:08:35.478 sys 0m0.786s 00:08:35.478 10:22:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:35.478 10:22:29 -- common/autotest_common.sh@10 -- # set +x 00:08:35.478 10:22:29 -- spdk/autotest.sh@177 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:08:35.478 10:22:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:35.478 10:22:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:35.478 10:22:29 -- common/autotest_common.sh@10 -- # set +x 00:08:35.478 ************************************ 00:08:35.478 START TEST rpc_client 00:08:35.478 ************************************ 00:08:35.478 10:22:29 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:08:35.478 * Looking for test storage... 
00:08:35.478 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:08:35.478 10:22:29 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:08:35.478 OK 00:08:35.478 10:22:29 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:08:35.478 00:08:35.478 real 0m0.137s 00:08:35.478 user 0m0.079s 00:08:35.478 sys 0m0.069s 00:08:35.478 ************************************ 00:08:35.478 END TEST rpc_client 00:08:35.478 ************************************ 00:08:35.478 10:22:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:35.478 10:22:29 -- common/autotest_common.sh@10 -- # set +x 00:08:35.478 10:22:29 -- spdk/autotest.sh@178 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:08:35.478 10:22:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:35.478 10:22:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:35.478 10:22:29 -- common/autotest_common.sh@10 -- # set +x 00:08:35.478 ************************************ 00:08:35.478 START TEST json_config 00:08:35.478 ************************************ 00:08:35.478 10:22:29 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:08:35.478 10:22:29 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:35.478 10:22:29 -- nvmf/common.sh@7 -- # uname -s 00:08:35.478 10:22:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:35.478 10:22:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:35.478 10:22:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:35.478 10:22:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:35.478 10:22:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:35.478 10:22:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:35.478 10:22:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:35.478 10:22:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:35.478 10:22:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:35.478 10:22:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:35.478 10:22:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:716cd334-57e8-44af-b566-393aac03c86c 00:08:35.478 10:22:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=716cd334-57e8-44af-b566-393aac03c86c 00:08:35.478 10:22:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:35.478 10:22:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:35.478 10:22:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:35.478 10:22:29 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:35.478 10:22:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:35.478 10:22:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:35.478 10:22:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:35.478 10:22:29 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:35.478 10:22:29 -- paths/export.sh@3 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:35.478 10:22:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:35.478 10:22:29 -- paths/export.sh@5 -- # export PATH 00:08:35.478 10:22:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:35.478 10:22:29 -- nvmf/common.sh@46 -- # : 0 00:08:35.478 10:22:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:35.478 10:22:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:35.478 10:22:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:35.478 10:22:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:35.479 10:22:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:35.479 10:22:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:35.479 10:22:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:35.479 10:22:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:35.479 10:22:29 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:08:35.479 10:22:29 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:08:35.479 10:22:29 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:08:35.479 10:22:29 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:08:35.479 10:22:29 -- json_config/json_config.sh@30 -- # app_pid=([target]="" [initiator]="") 00:08:35.479 10:22:29 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:08:35.479 10:22:29 -- json_config/json_config.sh@31 -- # app_socket=([target]='/var/tmp/spdk_tgt.sock' [initiator]='/var/tmp/spdk_initiator.sock') 00:08:35.479 10:22:29 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:08:35.479 10:22:29 -- json_config/json_config.sh@32 -- # app_params=([target]='-m 0x1 -s 1024' [initiator]='-m 0x2 -g -u -s 1024') 00:08:35.479 10:22:29 -- json_config/json_config.sh@32 -- # declare -A app_params 00:08:35.479 10:22:29 -- json_config/json_config.sh@33 -- # configs_path=([target]="$rootdir/spdk_tgt_config.json" [initiator]="$rootdir/spdk_initiator_config.json") 00:08:35.479 10:22:29 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:08:35.479 10:22:29 -- json_config/json_config.sh@43 -- # last_event_id=0 00:08:35.479 10:22:29 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:35.479 10:22:29 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:08:35.479 INFO: JSON configuration test 
init 00:08:35.479 10:22:29 -- json_config/json_config.sh@420 -- # json_config_test_init 00:08:35.479 10:22:29 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:08:35.479 10:22:29 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:35.479 10:22:29 -- common/autotest_common.sh@10 -- # set +x 00:08:35.479 10:22:29 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:08:35.479 10:22:29 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:35.479 10:22:29 -- common/autotest_common.sh@10 -- # set +x 00:08:35.479 10:22:29 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:08:35.479 10:22:29 -- json_config/json_config.sh@98 -- # local app=target 00:08:35.479 10:22:29 -- json_config/json_config.sh@99 -- # shift 00:08:35.479 10:22:29 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:08:35.479 10:22:29 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:08:35.479 10:22:29 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:08:35.479 10:22:29 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:08:35.479 10:22:29 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:08:35.479 10:22:29 -- json_config/json_config.sh@111 -- # app_pid[$app]=105525 00:08:35.479 10:22:29 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:08:35.479 Waiting for target to run... 00:08:35.479 10:22:29 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:08:35.479 10:22:29 -- json_config/json_config.sh@114 -- # waitforlisten 105525 /var/tmp/spdk_tgt.sock 00:08:35.479 10:22:29 -- common/autotest_common.sh@819 -- # '[' -z 105525 ']' 00:08:35.479 10:22:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:35.479 10:22:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:35.479 10:22:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:35.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:35.479 10:22:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:35.479 10:22:29 -- common/autotest_common.sh@10 -- # set +x 00:08:35.737 [2024-07-12 10:22:29.425754] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
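json_config starts a second spdk_tgt ('target') with --wait-for-rpc and its own socket, so the target idles before subsystem init until the test has staged its configuration. A hedged sketch of talking to that pre-init instance; framework_start_init is the standard RPC that releases it, shown here only to illustrate the pattern, not as a step this test performs at this point:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc -s /var/tmp/spdk_tgt.sock rpc_get_methods --current  # pre-init method set
  $rpc -s /var/tmp/spdk_tgt.sock framework_start_init       # finish subsystem init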
00:08:35.737 [2024-07-12 10:22:29.426116] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105525 ] 00:08:35.995 [2024-07-12 10:22:29.897493] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.253 [2024-07-12 10:22:30.091918] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:36.253 [2024-07-12 10:22:30.092411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.512 10:22:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:36.512 10:22:30 -- common/autotest_common.sh@852 -- # return 0 00:08:36.512 00:08:36.512 10:22:30 -- json_config/json_config.sh@115 -- # echo '' 00:08:36.512 10:22:30 -- json_config/json_config.sh@322 -- # create_accel_config 00:08:36.512 10:22:30 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:08:36.512 10:22:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:36.512 10:22:30 -- common/autotest_common.sh@10 -- # set +x 00:08:36.512 10:22:30 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:08:36.512 10:22:30 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:08:36.512 10:22:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:36.512 10:22:30 -- common/autotest_common.sh@10 -- # set +x 00:08:36.512 10:22:30 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:08:36.512 10:22:30 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:08:36.512 10:22:30 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:08:37.886 10:22:31 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:08:37.886 10:22:31 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:08:37.886 10:22:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:37.886 10:22:31 -- common/autotest_common.sh@10 -- # set +x 00:08:37.886 10:22:31 -- json_config/json_config.sh@48 -- # local ret=0 00:08:37.886 10:22:31 -- json_config/json_config.sh@49 -- # enabled_types=("bdev_register" "bdev_unregister") 00:08:37.886 10:22:31 -- json_config/json_config.sh@49 -- # local enabled_types 00:08:37.886 10:22:31 -- json_config/json_config.sh@51 -- # get_types=($(tgt_rpc notify_get_types | jq -r '.[]')) 00:08:37.886 10:22:31 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:08:37.886 10:22:31 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:08:37.887 10:22:31 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:08:37.887 10:22:31 -- json_config/json_config.sh@51 -- # local get_types 00:08:37.887 10:22:31 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:08:37.887 10:22:31 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:08:37.887 10:22:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:37.887 10:22:31 -- common/autotest_common.sh@10 -- # set +x 00:08:37.887 10:22:31 -- json_config/json_config.sh@58 -- # return 0 00:08:37.887 10:22:31 -- json_config/json_config.sh@331 -- # [[ 1 -eq 1 ]] 00:08:37.887 10:22:31 -- json_config/json_config.sh@332 -- # 
create_bdev_subsystem_config 00:08:37.887 10:22:31 -- json_config/json_config.sh@158 -- # timing_enter create_bdev_subsystem_config 00:08:37.887 10:22:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:37.887 10:22:31 -- common/autotest_common.sh@10 -- # set +x 00:08:37.887 10:22:31 -- json_config/json_config.sh@160 -- # expected_notifications=() 00:08:37.887 10:22:31 -- json_config/json_config.sh@160 -- # local expected_notifications 00:08:37.887 10:22:31 -- json_config/json_config.sh@164 -- # expected_notifications+=($(get_notifications)) 00:08:37.887 10:22:31 -- json_config/json_config.sh@164 -- # get_notifications 00:08:37.887 10:22:31 -- json_config/json_config.sh@62 -- # local ev_type ev_ctx event_id 00:08:37.887 10:22:31 -- json_config/json_config.sh@64 -- # IFS=: 00:08:37.887 10:22:31 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:37.887 10:22:31 -- json_config/json_config.sh@61 -- # tgt_rpc notify_get_notifications -i 0 00:08:37.887 10:22:31 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:08:37.887 10:22:31 -- json_config/json_config.sh@61 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:08:38.145 10:22:31 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1 00:08:38.145 10:22:31 -- json_config/json_config.sh@64 -- # IFS=: 00:08:38.145 10:22:31 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:38.145 10:22:31 -- json_config/json_config.sh@166 -- # [[ 1 -eq 1 ]] 00:08:38.145 10:22:31 -- json_config/json_config.sh@167 -- # local lvol_store_base_bdev=Nvme0n1 00:08:38.145 10:22:31 -- json_config/json_config.sh@169 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:08:38.145 10:22:31 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2 00:08:38.403 Nvme0n1p0 Nvme0n1p1 00:08:38.403 10:22:32 -- json_config/json_config.sh@170 -- # tgt_rpc bdev_split_create Malloc0 3 00:08:38.403 10:22:32 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:08:38.661 [2024-07-12 10:22:32.567830] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:08:38.661 [2024-07-12 10:22:32.568230] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:08:38.661 00:08:38.661 10:22:32 -- json_config/json_config.sh@171 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:08:38.661 10:22:32 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:08:38.919 Malloc3 00:08:39.178 10:22:32 -- json_config/json_config.sh@172 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:08:39.178 10:22:32 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:08:39.178 [2024-07-12 10:22:33.069559] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:39.178 [2024-07-12 10:22:33.069938] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:39.178 [2024-07-12 10:22:33.070020] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:08:39.178 [2024-07-12 10:22:33.070312] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:08:39.178 [2024-07-12 10:22:33.072932] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:39.178 [2024-07-12 10:22:33.073190] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:08:39.178 PTBdevFromMalloc3 00:08:39.178 10:22:33 -- json_config/json_config.sh@174 -- # tgt_rpc bdev_null_create Null0 32 512 00:08:39.178 10:22:33 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:08:39.436 Null0 00:08:39.436 10:22:33 -- json_config/json_config.sh@176 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:08:39.436 10:22:33 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:08:40.003 Malloc0 00:08:40.003 10:22:33 -- json_config/json_config.sh@177 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:08:40.003 10:22:33 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:08:40.003 Malloc1 00:08:40.261 10:22:33 -- json_config/json_config.sh@190 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:08:40.261 10:22:33 -- json_config/json_config.sh@193 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 00:08:40.519 102400+0 records in 00:08:40.519 102400+0 records out 00:08:40.519 104857600 bytes (105 MB, 100 MiB) copied, 0.350351 s, 299 MB/s 00:08:40.519 10:22:34 -- json_config/json_config.sh@194 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:08:40.519 10:22:34 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:08:40.777 aio_disk 00:08:40.777 10:22:34 -- json_config/json_config.sh@195 -- # expected_notifications+=(bdev_register:aio_disk) 00:08:40.777 10:22:34 -- json_config/json_config.sh@200 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:08:40.777 10:22:34 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:08:41.036 3387b33c-dcff-43cb-8e43-8ea40cb137a4 00:08:41.036 10:22:34 -- json_config/json_config.sh@207 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:08:41.036 10:22:34 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:08:41.036 10:22:34 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:08:41.294 10:22:35 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:08:41.294 10:22:35 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32 00:08:41.552 10:22:35 -- json_config/json_config.sh@207 -- # tgt_rpc 
bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:08:41.552 10:22:35 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:08:41.810 10:22:35 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:08:41.810 10:22:35 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:08:42.068 10:22:35 -- json_config/json_config.sh@210 -- # [[ 0 -eq 1 ]] 00:08:42.068 10:22:35 -- json_config/json_config.sh@225 -- # [[ 0 -eq 1 ]] 00:08:42.068 10:22:35 -- json_config/json_config.sh@231 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:197fe394-06d2-4393-aed7-020695cce323 bdev_register:405658c6-b43f-4ddf-8983-61b6ffec8297 bdev_register:24427c71-fdd5-4f31-8624-72d3451d355f bdev_register:ae732e2a-207b-4e82-b4f8-6e874686d834 00:08:42.068 10:22:35 -- json_config/json_config.sh@70 -- # local events_to_check 00:08:42.068 10:22:35 -- json_config/json_config.sh@71 -- # local recorded_events 00:08:42.069 10:22:35 -- json_config/json_config.sh@74 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:08:42.069 10:22:35 -- json_config/json_config.sh@74 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:197fe394-06d2-4393-aed7-020695cce323 bdev_register:405658c6-b43f-4ddf-8983-61b6ffec8297 bdev_register:24427c71-fdd5-4f31-8624-72d3451d355f bdev_register:ae732e2a-207b-4e82-b4f8-6e874686d834 00:08:42.069 10:22:35 -- json_config/json_config.sh@74 -- # sort 00:08:42.069 10:22:35 -- json_config/json_config.sh@75 -- # recorded_events=($(get_notifications | sort)) 00:08:42.069 10:22:35 -- json_config/json_config.sh@75 -- # get_notifications 00:08:42.069 10:22:35 -- json_config/json_config.sh@62 -- # local ev_type ev_ctx event_id 00:08:42.069 10:22:35 -- json_config/json_config.sh@75 -- # sort 00:08:42.069 10:22:35 -- json_config/json_config.sh@64 -- # IFS=: 00:08:42.069 10:22:35 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:42.069 10:22:35 -- json_config/json_config.sh@61 -- # tgt_rpc notify_get_notifications -i 0 00:08:42.069 10:22:35 -- json_config/json_config.sh@61 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:08:42.069 10:22:35 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:08:42.328 10:22:36 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1 00:08:42.328 10:22:36 -- json_config/json_config.sh@64 -- # IFS=: 00:08:42.328 10:22:36 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:42.328 10:22:36 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1p1 00:08:42.328 10:22:36 -- json_config/json_config.sh@64 -- # IFS=: 00:08:42.328 10:22:36 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:42.328 10:22:36 -- json_config/json_config.sh@65 -- # echo 
bdev_register:Nvme0n1p0 00:08:42.328 10:22:36 -- json_config/json_config.sh@64 -- # IFS=: 00:08:42.328 10:22:36 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:42.328 10:22:36 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc3 00:08:42.328 10:22:36 -- json_config/json_config.sh@64 -- # IFS=: 00:08:42.328 10:22:36 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:42.328 10:22:36 -- json_config/json_config.sh@65 -- # echo bdev_register:PTBdevFromMalloc3 00:08:42.328 10:22:36 -- json_config/json_config.sh@64 -- # IFS=: 00:08:42.328 10:22:36 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:42.328 10:22:36 -- json_config/json_config.sh@65 -- # echo bdev_register:Null0 00:08:42.328 10:22:36 -- json_config/json_config.sh@64 -- # IFS=: 00:08:42.328 10:22:36 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:42.328 10:22:36 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0 00:08:42.328 10:22:36 -- json_config/json_config.sh@64 -- # IFS=: 00:08:42.328 10:22:36 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:42.328 10:22:36 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p2 00:08:42.328 10:22:36 -- json_config/json_config.sh@64 -- # IFS=: 00:08:42.328 10:22:36 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:42.328 10:22:36 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p1 00:08:42.328 10:22:36 -- json_config/json_config.sh@64 -- # IFS=: 00:08:42.328 10:22:36 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:42.328 10:22:36 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p0 00:08:42.328 10:22:36 -- json_config/json_config.sh@64 -- # IFS=: 00:08:42.328 10:22:36 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:42.328 10:22:36 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc1 00:08:42.328 10:22:36 -- json_config/json_config.sh@64 -- # IFS=: 00:08:42.328 10:22:36 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:42.328 10:22:36 -- json_config/json_config.sh@65 -- # echo bdev_register:aio_disk 00:08:42.328 10:22:36 -- json_config/json_config.sh@64 -- # IFS=: 00:08:42.328 10:22:36 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:42.328 10:22:36 -- json_config/json_config.sh@65 -- # echo bdev_register:197fe394-06d2-4393-aed7-020695cce323 00:08:42.328 10:22:36 -- json_config/json_config.sh@64 -- # IFS=: 00:08:42.328 10:22:36 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:42.328 10:22:36 -- json_config/json_config.sh@65 -- # echo bdev_register:405658c6-b43f-4ddf-8983-61b6ffec8297 00:08:42.328 10:22:36 -- json_config/json_config.sh@64 -- # IFS=: 00:08:42.328 10:22:36 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:42.328 10:22:36 -- json_config/json_config.sh@65 -- # echo bdev_register:24427c71-fdd5-4f31-8624-72d3451d355f 00:08:42.328 10:22:36 -- json_config/json_config.sh@64 -- # IFS=: 00:08:42.328 10:22:36 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:42.328 10:22:36 -- json_config/json_config.sh@65 -- # echo bdev_register:ae732e2a-207b-4e82-b4f8-6e874686d834 00:08:42.328 10:22:36 -- json_config/json_config.sh@64 -- # IFS=: 00:08:42.328 10:22:36 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:42.328 10:22:36 -- json_config/json_config.sh@77 
-- # [[ bdev_register:197fe394-06d2-4393-aed7-020695cce323 bdev_register:24427c71-fdd5-4f31-8624-72d3451d355f bdev_register:405658c6-b43f-4ddf-8983-61b6ffec8297 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:ae732e2a-207b-4e82-b4f8-6e874686d834 bdev_register:aio_disk != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\1\9\7\f\e\3\9\4\-\0\6\d\2\-\4\3\9\3\-\a\e\d\7\-\0\2\0\6\9\5\c\c\e\3\2\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\2\4\4\2\7\c\7\1\-\f\d\d\5\-\4\f\3\1\-\8\6\2\4\-\7\2\d\3\4\5\1\d\3\5\5\f\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\4\0\5\6\5\8\c\6\-\b\4\3\f\-\4\d\d\f\-\8\9\8\3\-\6\1\b\6\f\f\e\c\8\2\9\7\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\e\7\3\2\e\2\a\-\2\0\7\b\-\4\e\8\2\-\b\4\f\8\-\6\e\8\7\4\6\8\6\d\8\3\4\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k ]] 00:08:42.328 10:22:36 -- json_config/json_config.sh@89 -- # cat 00:08:42.328 10:22:36 -- json_config/json_config.sh@89 -- # printf ' %s\n' bdev_register:197fe394-06d2-4393-aed7-020695cce323 bdev_register:24427c71-fdd5-4f31-8624-72d3451d355f bdev_register:405658c6-b43f-4ddf-8983-61b6ffec8297 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:ae732e2a-207b-4e82-b4f8-6e874686d834 bdev_register:aio_disk 00:08:42.328 Expected events matched: 00:08:42.328 bdev_register:197fe394-06d2-4393-aed7-020695cce323 00:08:42.328 bdev_register:24427c71-fdd5-4f31-8624-72d3451d355f 00:08:42.328 bdev_register:405658c6-b43f-4ddf-8983-61b6ffec8297 00:08:42.328 bdev_register:Malloc0 00:08:42.328 bdev_register:Malloc0p0 00:08:42.328 bdev_register:Malloc0p1 00:08:42.328 bdev_register:Malloc0p2 00:08:42.328 bdev_register:Malloc1 00:08:42.328 bdev_register:Malloc3 00:08:42.328 bdev_register:Null0 00:08:42.328 bdev_register:Nvme0n1 00:08:42.328 bdev_register:Nvme0n1p0 00:08:42.328 bdev_register:Nvme0n1p1 00:08:42.328 bdev_register:PTBdevFromMalloc3 00:08:42.328 bdev_register:ae732e2a-207b-4e82-b4f8-6e874686d834 00:08:42.328 bdev_register:aio_disk 00:08:42.328 10:22:36 -- json_config/json_config.sh@233 -- # timing_exit create_bdev_subsystem_config 00:08:42.328 10:22:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:42.328 10:22:36 -- common/autotest_common.sh@10 -- # set +x 00:08:42.328 10:22:36 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:08:42.328 10:22:36 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:08:42.328 10:22:36 -- json_config/json_config.sh@343 -- # [[ 0 -eq 1 ]] 00:08:42.328 10:22:36 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:08:42.328 10:22:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:42.328 10:22:36 -- common/autotest_common.sh@10 -- # set +x 00:08:42.328 
10:22:36 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:08:42.328 10:22:36 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:08:42.328 10:22:36 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:08:42.587 MallocBdevForConfigChangeCheck 00:08:42.587 10:22:36 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:08:42.587 10:22:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:42.587 10:22:36 -- common/autotest_common.sh@10 -- # set +x 00:08:42.587 10:22:36 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:08:42.587 10:22:36 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:43.154 INFO: shutting down applications... 00:08:43.154 10:22:36 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:08:43.154 10:22:36 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:08:43.154 10:22:36 -- json_config/json_config.sh@431 -- # json_config_clear target 00:08:43.154 10:22:36 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:08:43.154 10:22:36 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:08:43.154 [2024-07-12 10:22:37.045282] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:08:43.412 Calling clear_vhost_scsi_subsystem 00:08:43.412 Calling clear_iscsi_subsystem 00:08:43.412 Calling clear_vhost_blk_subsystem 00:08:43.412 Calling clear_nbd_subsystem 00:08:43.412 Calling clear_nvmf_subsystem 00:08:43.412 Calling clear_bdev_subsystem 00:08:43.412 Calling clear_accel_subsystem 00:08:43.412 Calling clear_iobuf_subsystem 00:08:43.412 Calling clear_sock_subsystem 00:08:43.412 Calling clear_vmd_subsystem 00:08:43.412 Calling clear_scheduler_subsystem 00:08:43.412 10:22:37 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:08:43.412 10:22:37 -- json_config/json_config.sh@396 -- # count=100 00:08:43.412 10:22:37 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:08:43.412 10:22:37 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:43.412 10:22:37 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:08:43.412 10:22:37 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:08:43.978 10:22:37 -- json_config/json_config.sh@398 -- # break 00:08:43.978 10:22:37 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:08:43.978 10:22:37 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:08:43.978 10:22:37 -- json_config/json_config.sh@120 -- # local app=target 00:08:43.978 10:22:37 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:08:43.978 10:22:37 -- json_config/json_config.sh@124 -- # [[ -n 105525 ]] 00:08:43.978 10:22:37 -- json_config/json_config.sh@127 -- # kill -SIGINT 105525 00:08:43.978 10:22:37 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:08:43.978 10:22:37 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:08:43.978 10:22:37 -- 
json_config/json_config.sh@130 -- # kill -0 105525 00:08:43.978 10:22:37 -- json_config/json_config.sh@134 -- # sleep 0.5 00:08:44.237 10:22:38 -- json_config/json_config.sh@129 -- # (( i++ )) 00:08:44.237 10:22:38 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:08:44.237 10:22:38 -- json_config/json_config.sh@130 -- # kill -0 105525 00:08:44.237 10:22:38 -- json_config/json_config.sh@134 -- # sleep 0.5 00:08:44.804 SPDK target shutdown done 00:08:44.804 INFO: relaunching applications... 00:08:44.804 Waiting for target to run... 00:08:44.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:44.804 10:22:38 -- json_config/json_config.sh@129 -- # (( i++ )) 00:08:44.804 10:22:38 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:08:44.804 10:22:38 -- json_config/json_config.sh@130 -- # kill -0 105525 00:08:44.804 10:22:38 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:08:44.804 10:22:38 -- json_config/json_config.sh@132 -- # break 00:08:44.804 10:22:38 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:08:44.804 10:22:38 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:08:44.804 10:22:38 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:08:44.804 10:22:38 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:44.804 10:22:38 -- json_config/json_config.sh@98 -- # local app=target 00:08:44.804 10:22:38 -- json_config/json_config.sh@99 -- # shift 00:08:44.804 10:22:38 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:08:44.804 10:22:38 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:08:44.804 10:22:38 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:08:44.804 10:22:38 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:08:44.804 10:22:38 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:08:44.804 10:22:38 -- json_config/json_config.sh@111 -- # app_pid[$app]=105813 00:08:44.804 10:22:38 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:08:44.804 10:22:38 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:44.804 10:22:38 -- json_config/json_config.sh@114 -- # waitforlisten 105813 /var/tmp/spdk_tgt.sock 00:08:44.804 10:22:38 -- common/autotest_common.sh@819 -- # '[' -z 105813 ']' 00:08:44.804 10:22:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:44.804 10:22:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:44.804 10:22:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:44.804 10:22:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:44.804 10:22:38 -- common/autotest_common.sh@10 -- # set +x 00:08:44.804 [2024-07-12 10:22:38.700080] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
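The shutdown handshake traced above is just SIGINT plus a bounded poll; a minimal bash sketch using the PID and limits echoed in the log (illustrative, not the script verbatim):

    pid=105525                                # spdk_tgt PID recorded at launch
    kill -SIGINT "$pid"                       # request a clean shutdown
    for ((i = 0; i < 30; i++)); do            # the script polls at most 30 times
        kill -0 "$pid" 2> /dev/null || break  # kill -0 only probes; failure means the process exited
        sleep 0.5
    done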
00:08:44.804 [2024-07-12 10:22:38.700518] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105813 ] 00:08:45.371 [2024-07-12 10:22:39.169626] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.629 [2024-07-12 10:22:39.362868] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:45.629 [2024-07-12 10:22:39.363393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.195 [2024-07-12 10:22:40.051973] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:08:46.195 [2024-07-12 10:22:40.052311] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:08:46.195 [2024-07-12 10:22:40.059929] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:08:46.195 [2024-07-12 10:22:40.060173] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:08:46.195 [2024-07-12 10:22:40.067930] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:46.196 [2024-07-12 10:22:40.068228] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:08:46.196 [2024-07-12 10:22:40.068356] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:08:46.463 [2024-07-12 10:22:40.161830] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:46.463 [2024-07-12 10:22:40.162050] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:46.463 [2024-07-12 10:22:40.162205] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:46.463 [2024-07-12 10:22:40.162390] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:46.463 [2024-07-12 10:22:40.163016] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:46.463 [2024-07-12 10:22:40.163170] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:08:46.727 00:08:46.727 INFO: Checking if target configuration is the same... 00:08:46.727 10:22:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:46.727 10:22:40 -- common/autotest_common.sh@852 -- # return 0 00:08:46.728 10:22:40 -- json_config/json_config.sh@115 -- # echo '' 00:08:46.728 10:22:40 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:08:46.728 10:22:40 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:08:46.728 10:22:40 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:46.728 10:22:40 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:08:46.728 10:22:40 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:46.728 + '[' 2 -ne 2 ']' 00:08:46.728 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:08:46.728 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:08:46.728 + rootdir=/home/vagrant/spdk_repo/spdk 00:08:46.728 +++ basename /dev/fd/62 00:08:46.728 ++ mktemp /tmp/62.XXX 00:08:46.728 + tmp_file_1=/tmp/62.UEE 00:08:46.728 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:46.728 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:08:46.728 + tmp_file_2=/tmp/spdk_tgt_config.json.8A3 00:08:46.728 + ret=0 00:08:46.728 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:08:46.985 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:08:46.985 + diff -u /tmp/62.UEE /tmp/spdk_tgt_config.json.8A3 00:08:46.985 + echo 'INFO: JSON config files are the same' 00:08:46.985 INFO: JSON config files are the same 00:08:46.985 + rm /tmp/62.UEE /tmp/spdk_tgt_config.json.8A3 00:08:46.985 + exit 0 00:08:46.985 INFO: changing configuration and checking if this can be detected... 00:08:46.985 10:22:40 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:08:46.985 10:22:40 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:08:46.985 10:22:40 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:08:46.986 10:22:40 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:08:47.243 10:22:41 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:47.243 10:22:41 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:08:47.243 10:22:41 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:47.243 + '[' 2 -ne 2 ']' 00:08:47.243 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:08:47.501 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:08:47.501 + rootdir=/home/vagrant/spdk_repo/spdk 00:08:47.501 +++ basename /dev/fd/62 00:08:47.501 ++ mktemp /tmp/62.XXX 00:08:47.501 + tmp_file_1=/tmp/62.02h 00:08:47.501 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:47.501 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:08:47.501 + tmp_file_2=/tmp/spdk_tgt_config.json.7qP 00:08:47.501 + ret=0 00:08:47.501 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:08:47.760 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:08:47.760 + diff -u /tmp/62.02h /tmp/spdk_tgt_config.json.7qP 00:08:47.760 + ret=1 00:08:47.760 + echo '=== Start of file: /tmp/62.02h ===' 00:08:47.760 + cat /tmp/62.02h 00:08:47.760 + echo '=== End of file: /tmp/62.02h ===' 00:08:47.760 + echo '' 00:08:47.760 + echo '=== Start of file: /tmp/spdk_tgt_config.json.7qP ===' 00:08:47.760 + cat /tmp/spdk_tgt_config.json.7qP 00:08:47.760 + echo '=== End of file: /tmp/spdk_tgt_config.json.7qP ===' 00:08:47.760 + echo '' 00:08:47.760 + rm /tmp/62.02h /tmp/spdk_tgt_config.json.7qP 00:08:47.760 + exit 1 00:08:47.760 INFO: configuration change detected. 00:08:47.760 10:22:41 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 
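Condensed, the two checks above come down to sorting both configurations and diffing them, then mutating the live config and expecting the diff to fail. A sketch with illustrative temp-file names (the script itself uses mktemp):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
    "$rpc" -s /var/tmp/spdk_tgt.sock save_config | "$filter" -method sort > /tmp/live.json
    "$filter" -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/saved.json
    diff -u /tmp/saved.json /tmp/live.json    # exit 0: configurations match
    "$rpc" -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
    "$rpc" -s /var/tmp/spdk_tgt.sock save_config | "$filter" -method sort > /tmp/live.json
    diff -u /tmp/saved.json /tmp/live.json || echo 'INFO: configuration change detected.'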
00:08:47.760 10:22:41 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:08:47.760 10:22:41 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:08:47.760 10:22:41 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:47.760 10:22:41 -- common/autotest_common.sh@10 -- # set +x 00:08:47.760 10:22:41 -- json_config/json_config.sh@360 -- # local ret=0 00:08:47.760 10:22:41 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:08:47.760 10:22:41 -- json_config/json_config.sh@370 -- # [[ -n 105813 ]] 00:08:47.760 10:22:41 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:08:47.760 10:22:41 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:08:47.760 10:22:41 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:47.760 10:22:41 -- common/autotest_common.sh@10 -- # set +x 00:08:47.760 10:22:41 -- json_config/json_config.sh@239 -- # [[ 1 -eq 1 ]] 00:08:47.760 10:22:41 -- json_config/json_config.sh@240 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:08:47.760 10:22:41 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:08:48.018 10:22:41 -- json_config/json_config.sh@241 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:08:48.018 10:22:41 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:08:48.277 10:22:42 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:08:48.277 10:22:42 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0 00:08:48.536 10:22:42 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:08:48.536 10:22:42 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:08:48.795 10:22:42 -- json_config/json_config.sh@246 -- # uname -s 00:08:48.795 10:22:42 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:08:48.795 10:22:42 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:08:48.795 10:22:42 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:08:48.795 10:22:42 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:08:48.795 10:22:42 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:48.795 10:22:42 -- common/autotest_common.sh@10 -- # set +x 00:08:48.795 10:22:42 -- json_config/json_config.sh@376 -- # killprocess 105813 00:08:48.795 10:22:42 -- common/autotest_common.sh@926 -- # '[' -z 105813 ']' 00:08:48.795 10:22:42 -- common/autotest_common.sh@930 -- # kill -0 105813 00:08:48.795 10:22:42 -- common/autotest_common.sh@931 -- # uname 00:08:48.795 10:22:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:48.795 10:22:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 105813 00:08:48.795 10:22:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:48.795 10:22:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:48.795 killing process with pid 105813 00:08:48.795 10:22:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 105813' 00:08:48.795 10:22:42 -- common/autotest_common.sh@945 -- # kill 105813 00:08:48.795 10:22:42 -- common/autotest_common.sh@950 -- # wait 105813 00:08:50.171 10:22:43 -- 
json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:50.171 10:22:43 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:08:50.171 10:22:43 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:50.171 10:22:43 -- common/autotest_common.sh@10 -- # set +x 00:08:50.171 INFO: Success 00:08:50.171 10:22:43 -- json_config/json_config.sh@381 -- # return 0 00:08:50.171 10:22:43 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:08:50.171 ************************************ 00:08:50.171 END TEST json_config 00:08:50.171 ************************************ 00:08:50.171 00:08:50.171 real 0m14.477s 00:08:50.171 user 0m21.287s 00:08:50.171 sys 0m2.530s 00:08:50.171 10:22:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:50.171 10:22:43 -- common/autotest_common.sh@10 -- # set +x 00:08:50.171 10:22:43 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:08:50.171 10:22:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:50.171 10:22:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:50.171 10:22:43 -- common/autotest_common.sh@10 -- # set +x 00:08:50.171 ************************************ 00:08:50.171 START TEST json_config_extra_key 00:08:50.171 ************************************ 00:08:50.171 10:22:43 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:08:50.171 10:22:43 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:50.171 10:22:43 -- nvmf/common.sh@7 -- # uname -s 00:08:50.171 10:22:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:50.171 10:22:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:50.171 10:22:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:50.171 10:22:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:50.171 10:22:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:50.171 10:22:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:50.171 10:22:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:50.171 10:22:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:50.171 10:22:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:50.171 10:22:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:50.171 10:22:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1256f9c5-8d74-4a4b-bf0d-bae185666481 00:08:50.171 10:22:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=1256f9c5-8d74-4a4b-bf0d-bae185666481 00:08:50.171 10:22:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:50.171 10:22:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:50.171 10:22:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:50.171 10:22:43 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:50.171 10:22:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:50.171 10:22:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:50.171 10:22:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:50.171 10:22:43 -- paths/export.sh@2 -- # 
PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:50.171 10:22:43 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:50.171 10:22:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:50.171 10:22:43 -- paths/export.sh@5 -- # export PATH 00:08:50.171 10:22:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:50.171 10:22:43 -- nvmf/common.sh@46 -- # : 0 00:08:50.171 10:22:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:50.171 10:22:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:50.171 10:22:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:50.171 10:22:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:50.171 10:22:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:50.171 10:22:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:50.171 10:22:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:50.171 10:22:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:50.171 10:22:43 -- json_config/json_config_extra_key.sh@16 -- # app_pid=([target]="") 00:08:50.171 10:22:43 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:08:50.171 10:22:43 -- json_config/json_config_extra_key.sh@17 -- # app_socket=([target]='/var/tmp/spdk_tgt.sock') 00:08:50.171 10:22:43 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:08:50.171 10:22:43 -- json_config/json_config_extra_key.sh@18 -- # app_params=([target]='-m 0x1 -s 1024') 00:08:50.171 10:22:43 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:08:50.171 10:22:43 -- json_config/json_config_extra_key.sh@19 -- # configs_path=([target]="$rootdir/test/json_config/extra_key.json") 00:08:50.171 10:22:43 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:08:50.171 10:22:43 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:50.171 10:22:43 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:08:50.171 INFO: launching applications... 
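The per-app bookkeeping echoed above is plain bash associative arrays keyed by app name; collected from the declarations in the trace:

    declare -A app_pid=([target]="")
    declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
    declare -A app_params=([target]='-m 0x1 -s 1024')
    declare -A configs_path=([target]="$rootdir/test/json_config/extra_key.json")
    # json_config_test_start_app reads app_params[$app] and records the new PID in app_pid[$app]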
00:08:50.171 10:22:43 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:08:50.171 10:22:43 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:08:50.171 10:22:43 -- json_config/json_config_extra_key.sh@25 -- # shift 00:08:50.171 10:22:43 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:08:50.171 10:22:43 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:08:50.171 10:22:43 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=106009 00:08:50.171 10:22:43 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:08:50.171 Waiting for target to run... 00:08:50.171 10:22:43 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:08:50.171 10:22:43 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 106009 /var/tmp/spdk_tgt.sock 00:08:50.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:50.171 10:22:43 -- common/autotest_common.sh@819 -- # '[' -z 106009 ']' 00:08:50.171 10:22:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:50.171 10:22:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:50.171 10:22:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:50.171 10:22:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:50.171 10:22:43 -- common/autotest_common.sh@10 -- # set +x 00:08:50.171 [2024-07-12 10:22:43.931559] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:50.171 [2024-07-12 10:22:43.932562] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106009 ] 00:08:50.737 [2024-07-12 10:22:44.396088] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.737 [2024-07-12 10:22:44.566615] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:50.737 [2024-07-12 10:22:44.567019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.109 00:08:52.109 INFO: shutting down applications... 00:08:52.109 10:22:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:52.109 10:22:45 -- common/autotest_common.sh@852 -- # return 0 00:08:52.109 10:22:45 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:08:52.109 10:22:45 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 
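waitforlisten (pid 106009 above) blocks until the freshly launched target answers on its UNIX socket. A rough equivalent, assuming spdk_get_version as the probe (the exact probe is an assumption; the retry count matches the max_retries=100 echoed in the trace):

    sock=/var/tmp/spdk_tgt.sock
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for ((i = 0; i < 100; i++)); do
        "$rpc" -s "$sock" spdk_get_version &> /dev/null && break  # socket is up and serving RPCs
        sleep 0.1                                                 # illustrative poll interval
    done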
00:08:52.109 10:22:45 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:08:52.109 10:22:45 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:08:52.109 10:22:45 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:08:52.109 10:22:45 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 106009 ]] 00:08:52.109 10:22:45 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 106009 00:08:52.109 10:22:45 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:08:52.109 10:22:45 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:08:52.109 10:22:45 -- json_config/json_config_extra_key.sh@50 -- # kill -0 106009 00:08:52.109 10:22:45 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:08:52.367 10:22:46 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:08:52.367 10:22:46 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:08:52.367 10:22:46 -- json_config/json_config_extra_key.sh@50 -- # kill -0 106009 00:08:52.367 10:22:46 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:08:52.933 10:22:46 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:08:52.933 10:22:46 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:08:52.933 10:22:46 -- json_config/json_config_extra_key.sh@50 -- # kill -0 106009 00:08:52.933 10:22:46 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:08:53.498 10:22:47 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:08:53.498 10:22:47 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:08:53.498 10:22:47 -- json_config/json_config_extra_key.sh@50 -- # kill -0 106009 00:08:53.498 10:22:47 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:08:53.756 10:22:47 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:08:53.756 10:22:47 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:08:53.756 10:22:47 -- json_config/json_config_extra_key.sh@50 -- # kill -0 106009 00:08:53.756 10:22:47 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:08:54.322 SPDK target shutdown done 00:08:54.322 Success 00:08:54.322 10:22:48 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:08:54.322 10:22:48 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:08:54.322 10:22:48 -- json_config/json_config_extra_key.sh@50 -- # kill -0 106009 00:08:54.322 10:22:48 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:08:54.322 10:22:48 -- json_config/json_config_extra_key.sh@52 -- # break 00:08:54.322 10:22:48 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:08:54.322 10:22:48 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:08:54.322 10:22:48 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:08:54.322 00:08:54.322 real 0m4.342s 00:08:54.322 user 0m4.175s 00:08:54.322 sys 0m0.546s 00:08:54.322 ************************************ 00:08:54.322 END TEST json_config_extra_key 00:08:54.322 ************************************ 00:08:54.322 10:22:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:54.322 10:22:48 -- common/autotest_common.sh@10 -- # set +x 00:08:54.322 10:22:48 -- spdk/autotest.sh@180 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:54.322 10:22:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:54.322 10:22:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:54.322 10:22:48 -- 
common/autotest_common.sh@10 -- # set +x 00:08:54.322 ************************************ 00:08:54.322 START TEST alias_rpc 00:08:54.322 ************************************ 00:08:54.322 10:22:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:54.322 * Looking for test storage... 00:08:54.322 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:08:54.322 10:22:48 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:54.580 10:22:48 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=106118 00:08:54.580 10:22:48 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:54.580 10:22:48 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 106118 00:08:54.580 10:22:48 -- common/autotest_common.sh@819 -- # '[' -z 106118 ']' 00:08:54.580 10:22:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.580 10:22:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:54.580 10:22:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:54.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:54.580 10:22:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:54.580 10:22:48 -- common/autotest_common.sh@10 -- # set +x 00:08:54.580 [2024-07-12 10:22:48.319379] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:54.580 [2024-07-12 10:22:48.319896] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106118 ] 00:08:54.580 [2024-07-12 10:22:48.489507] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.838 [2024-07-12 10:22:48.733272] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:54.838 [2024-07-12 10:22:48.733931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.212 10:22:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:56.212 10:22:50 -- common/autotest_common.sh@852 -- # return 0 00:08:56.212 10:22:50 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:08:56.470 10:22:50 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 106118 00:08:56.470 10:22:50 -- common/autotest_common.sh@926 -- # '[' -z 106118 ']' 00:08:56.470 10:22:50 -- common/autotest_common.sh@930 -- # kill -0 106118 00:08:56.470 10:22:50 -- common/autotest_common.sh@931 -- # uname 00:08:56.470 10:22:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:56.470 10:22:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 106118 00:08:56.470 killing process with pid 106118 00:08:56.470 10:22:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:56.470 10:22:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:56.470 10:22:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 106118' 00:08:56.470 10:22:50 -- common/autotest_common.sh@945 -- # kill 106118 00:08:56.470 10:22:50 -- common/autotest_common.sh@950 -- # wait 106118 00:08:59.001 ************************************ 00:08:59.001 END TEST alias_rpc 00:08:59.001 ************************************ 00:08:59.001 00:08:59.001 real 
0m4.560s 00:08:59.001 user 0m4.892s 00:08:59.001 sys 0m0.623s 00:08:59.001 10:22:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:59.001 10:22:52 -- common/autotest_common.sh@10 -- # set +x 00:08:59.001 10:22:52 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:08:59.001 10:22:52 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:08:59.001 10:22:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:59.001 10:22:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:59.001 10:22:52 -- common/autotest_common.sh@10 -- # set +x 00:08:59.001 ************************************ 00:08:59.001 START TEST spdkcli_tcp 00:08:59.001 ************************************ 00:08:59.001 10:22:52 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:08:59.001 * Looking for test storage... 00:08:59.001 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:08:59.001 10:22:52 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:08:59.001 10:22:52 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:08:59.001 10:22:52 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:08:59.001 10:22:52 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:08:59.001 10:22:52 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:08:59.001 10:22:52 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:08:59.001 10:22:52 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:08:59.001 10:22:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:59.001 10:22:52 -- common/autotest_common.sh@10 -- # set +x 00:08:59.001 10:22:52 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=106256 00:08:59.001 10:22:52 -- spdkcli/tcp.sh@27 -- # waitforlisten 106256 00:08:59.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:59.001 10:22:52 -- common/autotest_common.sh@819 -- # '[' -z 106256 ']' 00:08:59.001 10:22:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:59.001 10:22:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:59.001 10:22:52 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:08:59.001 10:22:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:59.001 10:22:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:59.001 10:22:52 -- common/autotest_common.sh@10 -- # set +x 00:08:59.259 [2024-07-12 10:22:52.940644] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
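spdkcli_tcp, starting here, differs from the earlier tests in that it drives the RPC server over TCP: socat bridges 127.0.0.1:9998 to the UNIX socket, and rpc.py talks to the TCP side. A sketch of the bridge exercised just below (flags as passed by tcp.sh):

    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &   # bridge TCP port 9998 to the RPC socket
    socat_pid=$!
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    kill "$socat_pid"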
00:08:59.259 [2024-07-12 10:22:52.941567] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106256 ] 00:08:59.259 [2024-07-12 10:22:53.113537] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:59.516 [2024-07-12 10:22:53.345805] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:59.516 [2024-07-12 10:22:53.346326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:59.516 [2024-07-12 10:22:53.346333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.891 10:22:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:00.891 10:22:54 -- common/autotest_common.sh@852 -- # return 0 00:09:00.891 10:22:54 -- spdkcli/tcp.sh@31 -- # socat_pid=106291 00:09:00.891 10:22:54 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:09:00.891 10:22:54 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:09:01.149 [ 00:09:01.149 "spdk_get_version", 00:09:01.149 "rpc_get_methods", 00:09:01.149 "trace_get_info", 00:09:01.149 "trace_get_tpoint_group_mask", 00:09:01.149 "trace_disable_tpoint_group", 00:09:01.149 "trace_enable_tpoint_group", 00:09:01.149 "trace_clear_tpoint_mask", 00:09:01.149 "trace_set_tpoint_mask", 00:09:01.149 "framework_get_pci_devices", 00:09:01.149 "framework_get_config", 00:09:01.149 "framework_get_subsystems", 00:09:01.149 "iobuf_get_stats", 00:09:01.149 "iobuf_set_options", 00:09:01.149 "sock_set_default_impl", 00:09:01.149 "sock_impl_set_options", 00:09:01.149 "sock_impl_get_options", 00:09:01.149 "vmd_rescan", 00:09:01.149 "vmd_remove_device", 00:09:01.149 "vmd_enable", 00:09:01.149 "accel_get_stats", 00:09:01.149 "accel_set_options", 00:09:01.149 "accel_set_driver", 00:09:01.149 "accel_crypto_key_destroy", 00:09:01.149 "accel_crypto_keys_get", 00:09:01.149 "accel_crypto_key_create", 00:09:01.149 "accel_assign_opc", 00:09:01.149 "accel_get_module_info", 00:09:01.149 "accel_get_opc_assignments", 00:09:01.149 "notify_get_notifications", 00:09:01.149 "notify_get_types", 00:09:01.149 "bdev_get_histogram", 00:09:01.149 "bdev_enable_histogram", 00:09:01.149 "bdev_set_qos_limit", 00:09:01.149 "bdev_set_qd_sampling_period", 00:09:01.149 "bdev_get_bdevs", 00:09:01.149 "bdev_reset_iostat", 00:09:01.150 "bdev_get_iostat", 00:09:01.150 "bdev_examine", 00:09:01.150 "bdev_wait_for_examine", 00:09:01.150 "bdev_set_options", 00:09:01.150 "scsi_get_devices", 00:09:01.150 "thread_set_cpumask", 00:09:01.150 "framework_get_scheduler", 00:09:01.150 "framework_set_scheduler", 00:09:01.150 "framework_get_reactors", 00:09:01.150 "thread_get_io_channels", 00:09:01.150 "thread_get_pollers", 00:09:01.150 "thread_get_stats", 00:09:01.150 "framework_monitor_context_switch", 00:09:01.150 "spdk_kill_instance", 00:09:01.150 "log_enable_timestamps", 00:09:01.150 "log_get_flags", 00:09:01.150 "log_clear_flag", 00:09:01.150 "log_set_flag", 00:09:01.150 "log_get_level", 00:09:01.150 "log_set_level", 00:09:01.150 "log_get_print_level", 00:09:01.150 "log_set_print_level", 00:09:01.150 "framework_enable_cpumask_locks", 00:09:01.150 "framework_disable_cpumask_locks", 00:09:01.150 "framework_wait_init", 00:09:01.150 "framework_start_init", 00:09:01.150 "virtio_blk_create_transport", 00:09:01.150 "virtio_blk_get_transports", 
00:09:01.150 "vhost_controller_set_coalescing", 00:09:01.150 "vhost_get_controllers", 00:09:01.150 "vhost_delete_controller", 00:09:01.150 "vhost_create_blk_controller", 00:09:01.150 "vhost_scsi_controller_remove_target", 00:09:01.150 "vhost_scsi_controller_add_target", 00:09:01.150 "vhost_start_scsi_controller", 00:09:01.150 "vhost_create_scsi_controller", 00:09:01.150 "nbd_get_disks", 00:09:01.150 "nbd_stop_disk", 00:09:01.150 "nbd_start_disk", 00:09:01.150 "env_dpdk_get_mem_stats", 00:09:01.150 "nvmf_subsystem_get_listeners", 00:09:01.150 "nvmf_subsystem_get_qpairs", 00:09:01.150 "nvmf_subsystem_get_controllers", 00:09:01.150 "nvmf_get_stats", 00:09:01.150 "nvmf_get_transports", 00:09:01.150 "nvmf_create_transport", 00:09:01.150 "nvmf_get_targets", 00:09:01.150 "nvmf_delete_target", 00:09:01.150 "nvmf_create_target", 00:09:01.150 "nvmf_subsystem_allow_any_host", 00:09:01.150 "nvmf_subsystem_remove_host", 00:09:01.150 "nvmf_subsystem_add_host", 00:09:01.150 "nvmf_subsystem_remove_ns", 00:09:01.150 "nvmf_subsystem_add_ns", 00:09:01.150 "nvmf_subsystem_listener_set_ana_state", 00:09:01.150 "nvmf_discovery_get_referrals", 00:09:01.150 "nvmf_discovery_remove_referral", 00:09:01.150 "nvmf_discovery_add_referral", 00:09:01.150 "nvmf_subsystem_remove_listener", 00:09:01.150 "nvmf_subsystem_add_listener", 00:09:01.150 "nvmf_delete_subsystem", 00:09:01.150 "nvmf_create_subsystem", 00:09:01.150 "nvmf_get_subsystems", 00:09:01.150 "nvmf_set_crdt", 00:09:01.150 "nvmf_set_config", 00:09:01.150 "nvmf_set_max_subsystems", 00:09:01.150 "iscsi_set_options", 00:09:01.150 "iscsi_get_auth_groups", 00:09:01.150 "iscsi_auth_group_remove_secret", 00:09:01.150 "iscsi_auth_group_add_secret", 00:09:01.150 "iscsi_delete_auth_group", 00:09:01.150 "iscsi_create_auth_group", 00:09:01.150 "iscsi_set_discovery_auth", 00:09:01.150 "iscsi_get_options", 00:09:01.150 "iscsi_target_node_request_logout", 00:09:01.150 "iscsi_target_node_set_redirect", 00:09:01.150 "iscsi_target_node_set_auth", 00:09:01.150 "iscsi_target_node_add_lun", 00:09:01.150 "iscsi_get_connections", 00:09:01.150 "iscsi_portal_group_set_auth", 00:09:01.150 "iscsi_start_portal_group", 00:09:01.150 "iscsi_delete_portal_group", 00:09:01.150 "iscsi_create_portal_group", 00:09:01.150 "iscsi_get_portal_groups", 00:09:01.150 "iscsi_delete_target_node", 00:09:01.150 "iscsi_target_node_remove_pg_ig_maps", 00:09:01.150 "iscsi_target_node_add_pg_ig_maps", 00:09:01.150 "iscsi_create_target_node", 00:09:01.150 "iscsi_get_target_nodes", 00:09:01.150 "iscsi_delete_initiator_group", 00:09:01.150 "iscsi_initiator_group_remove_initiators", 00:09:01.150 "iscsi_initiator_group_add_initiators", 00:09:01.150 "iscsi_create_initiator_group", 00:09:01.150 "iscsi_get_initiator_groups", 00:09:01.150 "iaa_scan_accel_module", 00:09:01.150 "dsa_scan_accel_module", 00:09:01.150 "ioat_scan_accel_module", 00:09:01.150 "accel_error_inject_error", 00:09:01.150 "bdev_iscsi_delete", 00:09:01.150 "bdev_iscsi_create", 00:09:01.150 "bdev_iscsi_set_options", 00:09:01.150 "bdev_virtio_attach_controller", 00:09:01.150 "bdev_virtio_scsi_get_devices", 00:09:01.150 "bdev_virtio_detach_controller", 00:09:01.150 "bdev_virtio_blk_set_hotplug", 00:09:01.150 "bdev_ftl_set_property", 00:09:01.150 "bdev_ftl_get_properties", 00:09:01.150 "bdev_ftl_get_stats", 00:09:01.150 "bdev_ftl_unmap", 00:09:01.150 "bdev_ftl_unload", 00:09:01.150 "bdev_ftl_delete", 00:09:01.150 "bdev_ftl_load", 00:09:01.150 "bdev_ftl_create", 00:09:01.150 "bdev_aio_delete", 00:09:01.150 "bdev_aio_rescan", 00:09:01.150 "bdev_aio_create", 
00:09:01.150 "blobfs_create", 00:09:01.150 "blobfs_detect", 00:09:01.150 "blobfs_set_cache_size", 00:09:01.150 "bdev_zone_block_delete", 00:09:01.150 "bdev_zone_block_create", 00:09:01.150 "bdev_delay_delete", 00:09:01.150 "bdev_delay_create", 00:09:01.150 "bdev_delay_update_latency", 00:09:01.150 "bdev_split_delete", 00:09:01.150 "bdev_split_create", 00:09:01.150 "bdev_error_inject_error", 00:09:01.150 "bdev_error_delete", 00:09:01.150 "bdev_error_create", 00:09:01.150 "bdev_raid_set_options", 00:09:01.150 "bdev_raid_remove_base_bdev", 00:09:01.150 "bdev_raid_add_base_bdev", 00:09:01.150 "bdev_raid_delete", 00:09:01.150 "bdev_raid_create", 00:09:01.150 "bdev_raid_get_bdevs", 00:09:01.150 "bdev_lvol_grow_lvstore", 00:09:01.150 "bdev_lvol_get_lvols", 00:09:01.150 "bdev_lvol_get_lvstores", 00:09:01.150 "bdev_lvol_delete", 00:09:01.150 "bdev_lvol_set_read_only", 00:09:01.150 "bdev_lvol_resize", 00:09:01.150 "bdev_lvol_decouple_parent", 00:09:01.150 "bdev_lvol_inflate", 00:09:01.150 "bdev_lvol_rename", 00:09:01.150 "bdev_lvol_clone_bdev", 00:09:01.150 "bdev_lvol_clone", 00:09:01.150 "bdev_lvol_snapshot", 00:09:01.150 "bdev_lvol_create", 00:09:01.150 "bdev_lvol_delete_lvstore", 00:09:01.150 "bdev_lvol_rename_lvstore", 00:09:01.150 "bdev_lvol_create_lvstore", 00:09:01.150 "bdev_passthru_delete", 00:09:01.150 "bdev_passthru_create", 00:09:01.150 "bdev_nvme_cuse_unregister", 00:09:01.150 "bdev_nvme_cuse_register", 00:09:01.150 "bdev_opal_new_user", 00:09:01.150 "bdev_opal_set_lock_state", 00:09:01.150 "bdev_opal_delete", 00:09:01.150 "bdev_opal_get_info", 00:09:01.150 "bdev_opal_create", 00:09:01.150 "bdev_nvme_opal_revert", 00:09:01.150 "bdev_nvme_opal_init", 00:09:01.150 "bdev_nvme_send_cmd", 00:09:01.150 "bdev_nvme_get_path_iostat", 00:09:01.150 "bdev_nvme_get_mdns_discovery_info", 00:09:01.150 "bdev_nvme_stop_mdns_discovery", 00:09:01.150 "bdev_nvme_start_mdns_discovery", 00:09:01.150 "bdev_nvme_set_multipath_policy", 00:09:01.150 "bdev_nvme_set_preferred_path", 00:09:01.150 "bdev_nvme_get_io_paths", 00:09:01.150 "bdev_nvme_remove_error_injection", 00:09:01.150 "bdev_nvme_add_error_injection", 00:09:01.150 "bdev_nvme_get_discovery_info", 00:09:01.150 "bdev_nvme_stop_discovery", 00:09:01.150 "bdev_nvme_start_discovery", 00:09:01.150 "bdev_nvme_get_controller_health_info", 00:09:01.150 "bdev_nvme_disable_controller", 00:09:01.150 "bdev_nvme_enable_controller", 00:09:01.150 "bdev_nvme_reset_controller", 00:09:01.150 "bdev_nvme_get_transport_statistics", 00:09:01.150 "bdev_nvme_apply_firmware", 00:09:01.150 "bdev_nvme_detach_controller", 00:09:01.150 "bdev_nvme_get_controllers", 00:09:01.150 "bdev_nvme_attach_controller", 00:09:01.150 "bdev_nvme_set_hotplug", 00:09:01.150 "bdev_nvme_set_options", 00:09:01.150 "bdev_null_resize", 00:09:01.150 "bdev_null_delete", 00:09:01.150 "bdev_null_create", 00:09:01.150 "bdev_malloc_delete", 00:09:01.150 "bdev_malloc_create" 00:09:01.150 ] 00:09:01.150 10:22:54 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:09:01.150 10:22:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:09:01.150 10:22:54 -- common/autotest_common.sh@10 -- # set +x 00:09:01.150 10:22:54 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:01.150 10:22:54 -- spdkcli/tcp.sh@38 -- # killprocess 106256 00:09:01.150 10:22:54 -- common/autotest_common.sh@926 -- # '[' -z 106256 ']' 00:09:01.150 10:22:54 -- common/autotest_common.sh@930 -- # kill -0 106256 00:09:01.150 10:22:54 -- common/autotest_common.sh@931 -- # uname 00:09:01.150 10:22:54 -- common/autotest_common.sh@931 
-- # '[' Linux = Linux ']' 00:09:01.150 10:22:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 106256 00:09:01.150 10:22:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:01.150 killing process with pid 106256 00:09:01.150 10:22:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:01.150 10:22:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 106256' 00:09:01.150 10:22:54 -- common/autotest_common.sh@945 -- # kill 106256 00:09:01.150 10:22:54 -- common/autotest_common.sh@950 -- # wait 106256 00:09:03.705 00:09:03.705 real 0m4.332s 00:09:03.705 user 0m8.053s 00:09:03.705 sys 0m0.638s 00:09:03.705 ************************************ 00:09:03.705 END TEST spdkcli_tcp 00:09:03.705 ************************************ 00:09:03.705 10:22:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:03.705 10:22:57 -- common/autotest_common.sh@10 -- # set +x 00:09:03.705 10:22:57 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:03.705 10:22:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:03.705 10:22:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:03.705 10:22:57 -- common/autotest_common.sh@10 -- # set +x 00:09:03.705 ************************************ 00:09:03.705 START TEST dpdk_mem_utility 00:09:03.705 ************************************ 00:09:03.705 10:22:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:03.705 * Looking for test storage... 00:09:03.705 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:09:03.705 10:22:57 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:09:03.705 10:22:57 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=106382 00:09:03.705 10:22:57 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:03.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:03.705 10:22:57 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 106382 00:09:03.705 10:22:57 -- common/autotest_common.sh@819 -- # '[' -z 106382 ']' 00:09:03.705 10:22:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:03.705 10:22:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:03.705 10:22:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:03.705 10:22:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:03.705 10:22:57 -- common/autotest_common.sh@10 -- # set +x 00:09:03.705 [2024-07-12 10:22:57.313823] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
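The dpdk_mem_utility test starting here has two moving parts: an RPC that asks the target to dump its DPDK allocator state to a file, and a helper script that summarizes that dump. A sketch under the paths printed in the trace (rpc_cmd in the script wraps rpc.py):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" -s /var/tmp/spdk.sock env_dpdk_get_mem_stats          # returns {"filename": "/tmp/spdk_mem_dump.txt"}
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py        # heap / mempool / memzone summary
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0   # detailed element dump, as printed below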
00:09:03.705 [2024-07-12 10:22:57.314620] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106382 ] 00:09:03.705 [2024-07-12 10:22:57.486974] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.964 [2024-07-12 10:22:57.669506] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:03.964 [2024-07-12 10:22:57.669783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.338 10:22:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:05.338 10:22:58 -- common/autotest_common.sh@852 -- # return 0 00:09:05.338 10:22:58 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:09:05.338 10:22:58 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:09:05.338 10:22:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:05.338 10:22:58 -- common/autotest_common.sh@10 -- # set +x 00:09:05.338 { 00:09:05.338 "filename": "/tmp/spdk_mem_dump.txt" 00:09:05.338 } 00:09:05.338 10:22:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:05.338 10:22:58 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:09:05.338 DPDK memory size 820.000000 MiB in 1 heap(s) 00:09:05.338 1 heaps totaling size 820.000000 MiB 00:09:05.338 size: 820.000000 MiB heap id: 0 00:09:05.338 end heaps---------- 00:09:05.338 8 mempools totaling size 598.116089 MiB 00:09:05.338 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:09:05.338 size: 158.602051 MiB name: PDU_data_out_Pool 00:09:05.338 size: 84.521057 MiB name: bdev_io_106382 00:09:05.338 size: 51.011292 MiB name: evtpool_106382 00:09:05.338 size: 50.003479 MiB name: msgpool_106382 00:09:05.338 size: 21.763794 MiB name: PDU_Pool 00:09:05.338 size: 19.513306 MiB name: SCSI_TASK_Pool 00:09:05.338 size: 0.026123 MiB name: Session_Pool 00:09:05.338 end mempools------- 00:09:05.338 6 memzones totaling size 4.142822 MiB 00:09:05.338 size: 1.000366 MiB name: RG_ring_0_106382 00:09:05.338 size: 1.000366 MiB name: RG_ring_1_106382 00:09:05.338 size: 1.000366 MiB name: RG_ring_4_106382 00:09:05.338 size: 1.000366 MiB name: RG_ring_5_106382 00:09:05.338 size: 0.125366 MiB name: RG_ring_2_106382 00:09:05.338 size: 0.015991 MiB name: RG_ring_3_106382 00:09:05.338 end memzones------- 00:09:05.338 10:22:59 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:09:05.338 heap id: 0 total size: 820.000000 MiB number of busy elements: 226 number of free elements: 18 00:09:05.338 list of free elements. 
size: 18.469727 MiB 00:09:05.338 element at address: 0x200000400000 with size: 1.999451 MiB 00:09:05.338 element at address: 0x200000800000 with size: 1.996887 MiB 00:09:05.338 element at address: 0x200007000000 with size: 1.995972 MiB 00:09:05.338 element at address: 0x20000b200000 with size: 1.995972 MiB 00:09:05.338 element at address: 0x200019100040 with size: 0.999939 MiB 00:09:05.338 element at address: 0x200019500040 with size: 0.999939 MiB 00:09:05.338 element at address: 0x200019600000 with size: 0.999329 MiB 00:09:05.338 element at address: 0x200003e00000 with size: 0.996094 MiB 00:09:05.338 element at address: 0x200032200000 with size: 0.994324 MiB 00:09:05.338 element at address: 0x200018e00000 with size: 0.959656 MiB 00:09:05.338 element at address: 0x200019900040 with size: 0.937256 MiB 00:09:05.338 element at address: 0x200000200000 with size: 0.835083 MiB 00:09:05.338 element at address: 0x20001b000000 with size: 0.560974 MiB 00:09:05.338 element at address: 0x200019200000 with size: 0.489197 MiB 00:09:05.338 element at address: 0x200019a00000 with size: 0.485413 MiB 00:09:05.338 element at address: 0x200013800000 with size: 0.468140 MiB 00:09:05.338 element at address: 0x200028400000 with size: 0.399963 MiB 00:09:05.338 element at address: 0x200003a00000 with size: 0.356140 MiB 00:09:05.338 list of standard malloc elements. size: 199.265869 MiB 00:09:05.338 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:09:05.338 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:09:05.338 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:09:05.338 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:09:05.338 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:09:05.338 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:09:05.338 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:09:05.338 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:09:05.339 element at address: 0x20000b1ff380 with size: 0.000366 MiB 00:09:05.339 element at address: 0x20000b1ff040 with size: 0.000305 MiB 00:09:05.339 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:09:05.339 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:09:05.339 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:09:05.339 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:09:05.339 element at address: 0x2000002d5f80 with size: 0.000244 MiB 00:09:05.339 element at address: 0x2000002d6080 with size: 0.000244 MiB 00:09:05.339 element at address: 0x2000002d6180 with size: 0.000244 MiB 00:09:05.339 element at address: 0x2000002d6280 with size: 0.000244 MiB 00:09:05.339 element at address: 0x2000002d6380 with size: 0.000244 MiB 00:09:05.339 element at address: 0x2000002d6480 with size: 0.000244 MiB 00:09:05.339 element at address: 0x2000002d6580 with size: 0.000244 MiB 00:09:05.339 element at address: 0x2000002d6680 with size: 0.000244 MiB 00:09:05.339 element at address: 0x2000002d6780 with size: 0.000244 MiB 00:09:05.339 element at address: 0x2000002d6880 with size: 0.000244 MiB 00:09:05.339 element at address: 0x2000002d6980 with size: 0.000244 MiB 00:09:05.339 element at address: 0x2000002d6a80 with size: 0.000244 MiB 00:09:05.339 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:09:05.339 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:09:05.339 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:09:05.339 element at address: 0x2000002d7000 with size: 0.000244 MiB 
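Both views in this dump — the heap/mempool/memzone summary that opened it and the per-element walk that resumes just below — come from scripts/dpdk_mem_info.py, which parses the file written by env_dpdk_get_mem_stats; -m selects a heap id for the element-level detail. The two invocations, exactly as the harness issues them:

  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py         # summary: heaps, mempools, memzones
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0    # element-level walk of heap id 0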
00:09:05.339 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:09:05.339 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:09:05.339 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:09:05.339 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:09:05.339 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:09:05.339 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:09:05.339 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:09:05.339 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:09:05.339 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:09:05.339 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:09:05.339 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:09:05.339 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:09:05.339 element at address: 0x200003aff980 with size: 0.000244 MiB 00:09:05.339 element at address: 0x200003affa80 with size: 0.000244 MiB 00:09:05.339 element at address: 0x200003eff000 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20000b1ff180 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20000b1ff280 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:09:05.339 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:09:05.339 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:09:05.339 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:09:05.339 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:09:05.339 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:09:05.339 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:09:05.339 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:09:05.339 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:09:05.339 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:09:05.339 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:09:05.339 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:09:05.339 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:09:05.339 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:09:05.339 element at address: 0x200013877d80 with size: 0.000244 MiB 00:09:05.339 element at address: 0x200013877e80 with size: 0.000244 MiB 00:09:05.339 element at address: 0x200013877f80 with size: 0.000244 MiB 00:09:05.339 element at address: 0x200013878080 with size: 0.000244 MiB 00:09:05.339 element at address: 0x200013878180 with size: 0.000244 MiB 00:09:05.339 element at address: 0x200013878280 with size: 0.000244 MiB 00:09:05.339 element at address: 0x200013878380 with size: 0.000244 MiB 00:09:05.339 element at address: 0x200013878480 with size: 0.000244 MiB 00:09:05.339 element at 
address: 0x200013878580 with size: 0.000244 MiB 00:09:05.339 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:09:05.339 element at address: 0x200019abc680 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b08f9c0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b08fac0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b08fbc0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b08fcc0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b08fdc0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b08fec0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b08ffc0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b0900c0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b0901c0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b0902c0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b0903c0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b0904c0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b0905c0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b0906c0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b0907c0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b0908c0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b091ec0 
with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:09:05.339 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:09:05.340 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:09:05.340 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:09:05.340 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:09:05.340 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:09:05.340 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:09:05.340 element at address: 0x20001b094fc0 with size: 0.000244 MiB 
00:09:05.340 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:09:05.340 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:09:05.340 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:09:05.340 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:09:05.340 element at address: 0x200028466640 with size: 0.000244 MiB 00:09:05.340 element at address: 0x200028466740 with size: 0.000244 MiB 00:09:05.340 element at address: 0x20002846d400 with size: 0.000244 MiB 00:09:05.340 element at address: 0x20002846d680 with size: 0.000244 MiB 00:09:05.340 element at address: 0x20002846d780 with size: 0.000244 MiB 00:09:05.340 element at address: 0x20002846d880 with size: 0.000244 MiB 00:09:05.340 element at address: 0x20002846d980 with size: 0.000244 MiB 00:09:05.340 element at address: 0x20002846da80 with size: 0.000244 MiB 00:09:05.340 element at address: 0x20002846db80 with size: 0.000244 MiB 00:09:05.340 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:09:05.340 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:09:05.340 element at address: 0x20002846de80 with size: 0.000244 MiB 00:09:05.340 element at address: 0x20002846df80 with size: 0.000244 MiB 00:09:05.340 element at address: 0x20002846e080 with size: 0.000244 MiB 00:09:05.340 element at address: 0x20002846e180 with size: 0.000244 MiB 00:09:05.340 element at address: 0x20002846e280 with size: 0.000244 MiB 00:09:05.340 element at address: 0x20002846e380 with size: 0.000244 MiB 00:09:05.340 element at address: 0x20002846e480 with size: 0.000244 MiB 00:09:05.340 element at address: 0x20002846e580 with size: 0.000244 MiB 00:09:05.340 element at address: 0x20002846e680 with size: 0.000244 MiB 00:09:05.340 element at address: 0x20002846e780 with size: 0.000244 MiB 00:09:05.340 element at address: 0x20002846e880 with size: 0.000244 MiB 00:09:05.340 element at address: 0x20002846e980 with size: 0.000244 MiB 00:09:05.340 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:09:05.340 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:09:05.340 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:09:05.340 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:09:05.340 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:09:05.340 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:09:05.340 element at address: 0x20002846f080 with size: 0.000244 MiB 00:09:05.340 element at address: 0x20002846f180 with size: 0.000244 MiB 00:09:05.340 element at address: 0x20002846f280 with size: 0.000244 MiB 00:09:05.340 element at address: 0x20002846f380 with size: 0.000244 MiB 00:09:05.340 element at address: 0x20002846f480 with size: 0.000244 MiB 00:09:05.340 element at address: 0x20002846f580 with size: 0.000244 MiB 00:09:05.340 element at address: 0x20002846f680 with size: 0.000244 MiB 00:09:05.340 element at address: 0x20002846f780 with size: 0.000244 MiB 00:09:05.340 element at address: 0x20002846f880 with size: 0.000244 MiB 00:09:05.340 element at address: 0x20002846f980 with size: 0.000244 MiB 00:09:05.340 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:09:05.340 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:09:05.340 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:09:05.340 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:09:05.340 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:09:05.340 list of memzone associated elements. 
size: 602.264404 MiB 00:09:05.340 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:09:05.340 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:09:05.340 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:09:05.340 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:09:05.340 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:09:05.340 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_106382_0 00:09:05.340 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:09:05.340 associated memzone info: size: 48.002930 MiB name: MP_evtpool_106382_0 00:09:05.340 element at address: 0x200003fff340 with size: 48.003113 MiB 00:09:05.340 associated memzone info: size: 48.002930 MiB name: MP_msgpool_106382_0 00:09:05.340 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:09:05.340 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:09:05.340 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:09:05.340 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:09:05.340 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:09:05.340 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_106382 00:09:05.340 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:09:05.340 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_106382 00:09:05.340 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:09:05.340 associated memzone info: size: 1.007996 MiB name: MP_evtpool_106382 00:09:05.340 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:09:05.340 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:09:05.340 element at address: 0x200019abc780 with size: 1.008179 MiB 00:09:05.340 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:09:05.340 element at address: 0x200018efde00 with size: 1.008179 MiB 00:09:05.340 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:09:05.340 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:09:05.340 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:09:05.340 element at address: 0x200003eff100 with size: 1.000549 MiB 00:09:05.340 associated memzone info: size: 1.000366 MiB name: RG_ring_0_106382 00:09:05.340 element at address: 0x200003affb80 with size: 1.000549 MiB 00:09:05.340 associated memzone info: size: 1.000366 MiB name: RG_ring_1_106382 00:09:05.340 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:09:05.340 associated memzone info: size: 1.000366 MiB name: RG_ring_4_106382 00:09:05.340 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:09:05.340 associated memzone info: size: 1.000366 MiB name: RG_ring_5_106382 00:09:05.340 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:09:05.340 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_106382 00:09:05.340 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:09:05.340 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:09:05.340 element at address: 0x200013878680 with size: 0.500549 MiB 00:09:05.340 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:09:05.340 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:09:05.340 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:09:05.340 element at address: 0x200003adf740 with size: 0.125549 MiB 00:09:05.340 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_106382 00:09:05.340 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:09:05.340 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:09:05.340 element at address: 0x200028466840 with size: 0.023804 MiB 00:09:05.340 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:09:05.340 element at address: 0x200003adb500 with size: 0.016174 MiB 00:09:05.340 associated memzone info: size: 0.015991 MiB name: RG_ring_3_106382 00:09:05.340 element at address: 0x20002846c9c0 with size: 0.002502 MiB 00:09:05.340 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:09:05.340 element at address: 0x2000002d6b80 with size: 0.000366 MiB 00:09:05.340 associated memzone info: size: 0.000183 MiB name: MP_msgpool_106382 00:09:05.340 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:09:05.340 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_106382 00:09:05.340 element at address: 0x20002846d500 with size: 0.000366 MiB 00:09:05.340 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:09:05.340 10:22:59 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:09:05.340 10:22:59 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 106382 00:09:05.340 10:22:59 -- common/autotest_common.sh@926 -- # '[' -z 106382 ']' 00:09:05.340 10:22:59 -- common/autotest_common.sh@930 -- # kill -0 106382 00:09:05.340 10:22:59 -- common/autotest_common.sh@931 -- # uname 00:09:05.340 10:22:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:05.340 10:22:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 106382 00:09:05.340 10:22:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:05.340 10:22:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:05.340 killing process with pid 106382 00:09:05.340 10:22:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 106382' 00:09:05.340 10:22:59 -- common/autotest_common.sh@945 -- # kill 106382 00:09:05.340 10:22:59 -- common/autotest_common.sh@950 -- # wait 106382 00:09:07.239 00:09:07.239 real 0m3.710s 00:09:07.239 user 0m3.976s 00:09:07.239 sys 0m0.466s 00:09:07.239 ************************************ 00:09:07.239 END TEST dpdk_mem_utility 00:09:07.239 ************************************ 00:09:07.239 10:23:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:07.239 10:23:00 -- common/autotest_common.sh@10 -- # set +x 00:09:07.239 10:23:00 -- spdk/autotest.sh@187 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:09:07.239 10:23:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:07.239 10:23:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:07.239 10:23:00 -- common/autotest_common.sh@10 -- # set +x 00:09:07.239 ************************************ 00:09:07.239 START TEST event 00:09:07.239 ************************************ 00:09:07.239 10:23:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:09:07.239 * Looking for test storage... 
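Both target teardowns traced above (pid 106256 for the tcp test, pid 106382 for this one) go through the harness's killprocess helper: confirm the pid is alive with kill -0, check via ps that the process is an SPDK reactor rather than sudo, then kill it and wait. A condensed sketch of the traced steps (the real helper lives in test/common/autotest_common.sh):

  killprocess() {
    local pid=$1
    kill -0 $pid                                    # errors out if the process is already gone
    process_name=$(ps --no-headers -o comm= $pid)   # an SPDK app reports itself as reactor_0
    # (the real helper special-cases process_name = sudo; elided here)
    echo "killing process with pid $pid"
    kill $pid
    wait $pid                                       # reap it and propagate the exit status
  }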
00:09:07.239 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:09:07.240 10:23:01 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:07.240 10:23:01 -- bdev/nbd_common.sh@6 -- # set -e 00:09:07.240 10:23:01 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:07.240 10:23:01 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:09:07.240 10:23:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:07.240 10:23:01 -- common/autotest_common.sh@10 -- # set +x 00:09:07.240 ************************************ 00:09:07.240 START TEST event_perf 00:09:07.240 ************************************ 00:09:07.240 10:23:01 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:07.240 Running I/O for 1 seconds...[2024-07-12 10:23:01.050199] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:09:07.240 [2024-07-12 10:23:01.050366] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106496 ] 00:09:07.498 [2024-07-12 10:23:01.226521] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:07.498 [2024-07-12 10:23:01.384397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:07.498 [2024-07-12 10:23:01.384535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:07.498 [2024-07-12 10:23:01.384606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:07.498 [2024-07-12 10:23:01.384608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.874 Running I/O for 1 seconds... 00:09:08.874 lcore 0: 202430 00:09:08.874 lcore 1: 202429 00:09:08.874 lcore 2: 202430 00:09:08.874 lcore 3: 202429 00:09:08.874 done. 00:09:08.874 00:09:08.874 real 0m1.689s 00:09:08.874 user 0m4.415s 00:09:08.874 sys 0m0.121s 00:09:08.874 10:23:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:08.874 10:23:02 -- common/autotest_common.sh@10 -- # set +x 00:09:08.874 ************************************ 00:09:08.874 END TEST event_perf 00:09:08.874 ************************************ 00:09:08.874 10:23:02 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:09:08.874 10:23:02 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:09:08.874 10:23:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:08.874 10:23:02 -- common/autotest_common.sh@10 -- # set +x 00:09:08.874 ************************************ 00:09:08.874 START TEST event_reactor 00:09:08.874 ************************************ 00:09:08.874 10:23:02 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:09:08.874 [2024-07-12 10:23:02.791524] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
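event_perf above pushes events across all four reactors for one second and reports the per-lcore totals (roughly 202 thousand per lcore in this run); the reactor and reactor_perf tests that follow exercise oneshot events plus tick pollers, and raw event throughput on a single core. The trace shows the exact invocations:

  /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1   # 4 reactors, 1 second
  /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1                # oneshot + tick pollers
  /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1      # events/second benchmark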
00:09:08.874 [2024-07-12 10:23:02.791852] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106554 ] 00:09:09.132 [2024-07-12 10:23:02.947697] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.390 [2024-07-12 10:23:03.128258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.765 test_start 00:09:10.765 oneshot 00:09:10.765 tick 100 00:09:10.765 tick 100 00:09:10.765 tick 250 00:09:10.765 tick 100 00:09:10.765 tick 100 00:09:10.765 tick 100 00:09:10.765 tick 250 00:09:10.765 tick 500 00:09:10.765 tick 100 00:09:10.765 tick 100 00:09:10.765 tick 250 00:09:10.765 tick 100 00:09:10.765 tick 100 00:09:10.765 test_end 00:09:10.765 00:09:10.765 real 0m1.686s 00:09:10.765 user 0m1.496s 00:09:10.765 sys 0m0.089s 00:09:10.765 10:23:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:10.765 10:23:04 -- common/autotest_common.sh@10 -- # set +x 00:09:10.765 ************************************ 00:09:10.765 END TEST event_reactor 00:09:10.765 ************************************ 00:09:10.765 10:23:04 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:10.765 10:23:04 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:09:10.765 10:23:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:10.765 10:23:04 -- common/autotest_common.sh@10 -- # set +x 00:09:10.765 ************************************ 00:09:10.765 START TEST event_reactor_perf 00:09:10.765 ************************************ 00:09:10.765 10:23:04 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:10.765 [2024-07-12 10:23:04.538991] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:09:10.765 [2024-07-12 10:23:04.539214] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106604 ] 00:09:11.024 [2024-07-12 10:23:04.706862] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.024 [2024-07-12 10:23:04.891064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.400 test_start 00:09:12.400 test_end 00:09:12.400 Performance: 365277 events per second 00:09:12.400 00:09:12.400 real 0m1.749s 00:09:12.400 user 0m1.527s 00:09:12.400 sys 0m0.121s 00:09:12.400 10:23:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:12.400 10:23:06 -- common/autotest_common.sh@10 -- # set +x 00:09:12.400 ************************************ 00:09:12.400 END TEST event_reactor_perf 00:09:12.400 ************************************ 00:09:12.400 10:23:06 -- event/event.sh@49 -- # uname -s 00:09:12.400 10:23:06 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:09:12.400 10:23:06 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:09:12.400 10:23:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:12.400 10:23:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:12.400 10:23:06 -- common/autotest_common.sh@10 -- # set +x 00:09:12.400 ************************************ 00:09:12.400 START TEST event_scheduler 00:09:12.400 ************************************ 00:09:12.400 10:23:06 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:09:12.658 * Looking for test storage... 00:09:12.658 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:09:12.658 10:23:06 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:09:12.658 10:23:06 -- scheduler/scheduler.sh@35 -- # scheduler_pid=106680 00:09:12.659 10:23:06 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:09:12.659 10:23:06 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:09:12.659 10:23:06 -- scheduler/scheduler.sh@37 -- # waitforlisten 106680 00:09:12.659 10:23:06 -- common/autotest_common.sh@819 -- # '[' -z 106680 ']' 00:09:12.659 10:23:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:12.659 10:23:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:12.659 10:23:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:12.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:12.659 10:23:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:12.659 10:23:06 -- common/autotest_common.sh@10 -- # set +x 00:09:12.659 [2024-07-12 10:23:06.457193] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:09:12.659 [2024-07-12 10:23:06.457460] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106680 ] 00:09:12.918 [2024-07-12 10:23:06.652892] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:13.176 [2024-07-12 10:23:06.881880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.176 [2024-07-12 10:23:06.882000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:13.176 [2024-07-12 10:23:06.882133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:13.176 [2024-07-12 10:23:06.882265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:13.742 10:23:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:13.742 10:23:07 -- common/autotest_common.sh@852 -- # return 0 00:09:13.742 10:23:07 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:09:13.742 10:23:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:13.742 10:23:07 -- common/autotest_common.sh@10 -- # set +x 00:09:13.742 POWER: Env isn't set yet! 00:09:13.742 POWER: Attempting to initialise ACPI cpufreq power management... 00:09:13.742 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:13.742 POWER: Cannot set governor of lcore 0 to userspace 00:09:13.742 POWER: Attempting to initialise PSTAT power management... 00:09:13.742 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:13.742 POWER: Cannot set governor of lcore 0 to performance 00:09:13.742 POWER: Attempting to initialise AMD PSTATE power management... 00:09:13.742 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:13.742 POWER: Cannot set governor of lcore 0 to userspace 00:09:13.742 POWER: Attempting to initialise CPPC power management... 00:09:13.742 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:13.742 POWER: Cannot set governor of lcore 0 to userspace 00:09:13.742 POWER: Attempting to initialise VM power management... 
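The POWER lines above (and the GUEST_CHANNEL failure just below) are DPDK's rte_power layer probing ACPI cpufreq, intel_pstate, AMD PSTATE, CPPC and finally the VM power-management channel; on this VM none is available, so the dynamic scheduler falls back to running without a frequency governor and prints its defaults (load limit 20, core limit 80, core busy 95, visible below). The failure is easy to confirm from the shell, since every probe opens the same sysfs node:

  # every probe ends up opening the same sysfs node; on a VM without cpufreq it is absent
  for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
    gov=$cpu/cpufreq/scaling_governor
    [ -r "$gov" ] && echo "$cpu: $(cat $gov)" || echo "$cpu: no cpufreq support"
  done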
00:09:13.742 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:09:13.742 POWER: Unable to set Power Management Environment for lcore 0 00:09:13.742 [2024-07-12 10:23:07.418819] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:09:13.742 [2024-07-12 10:23:07.418942] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:09:13.743 [2024-07-12 10:23:07.419055] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:09:13.743 [2024-07-12 10:23:07.419197] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:09:13.743 [2024-07-12 10:23:07.419338] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:09:13.743 [2024-07-12 10:23:07.419487] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:09:13.743 10:23:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:13.743 10:23:07 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:09:13.743 10:23:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:13.743 10:23:07 -- common/autotest_common.sh@10 -- # set +x 00:09:14.001 [2024-07-12 10:23:07.676457] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:09:14.001 10:23:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:14.001 10:23:07 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:09:14.001 10:23:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:14.001 10:23:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:14.001 10:23:07 -- common/autotest_common.sh@10 -- # set +x 00:09:14.001 ************************************ 00:09:14.001 START TEST scheduler_create_thread 00:09:14.001 ************************************ 00:09:14.001 10:23:07 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:09:14.001 10:23:07 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:09:14.001 10:23:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:14.001 10:23:07 -- common/autotest_common.sh@10 -- # set +x 00:09:14.001 2 00:09:14.001 10:23:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:14.001 10:23:07 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:09:14.001 10:23:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:14.001 10:23:07 -- common/autotest_common.sh@10 -- # set +x 00:09:14.001 3 00:09:14.001 10:23:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:14.001 10:23:07 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:09:14.001 10:23:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:14.001 10:23:07 -- common/autotest_common.sh@10 -- # set +x 00:09:14.001 4 00:09:14.001 10:23:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:14.001 10:23:07 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:09:14.001 10:23:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:14.001 10:23:07 -- common/autotest_common.sh@10 -- # set +x 00:09:14.001 5 00:09:14.001 10:23:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:14.001 10:23:07 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:09:14.001 10:23:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:14.001 10:23:07 -- common/autotest_common.sh@10 -- # set +x 00:09:14.001 6 00:09:14.001 10:23:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:14.001 10:23:07 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:09:14.001 10:23:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:14.001 10:23:07 -- common/autotest_common.sh@10 -- # set +x 00:09:14.001 7 00:09:14.001 10:23:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:14.001 10:23:07 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:09:14.001 10:23:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:14.001 10:23:07 -- common/autotest_common.sh@10 -- # set +x 00:09:14.001 8 00:09:14.001 10:23:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:14.001 10:23:07 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:09:14.001 10:23:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:14.001 10:23:07 -- common/autotest_common.sh@10 -- # set +x 00:09:14.001 9 00:09:14.001 10:23:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:14.001 10:23:07 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:09:14.001 10:23:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:14.001 10:23:07 -- common/autotest_common.sh@10 -- # set +x 00:09:14.001 10 00:09:14.002 10:23:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:14.002 10:23:07 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:09:14.002 10:23:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:14.002 10:23:07 -- common/autotest_common.sh@10 -- # set +x 00:09:14.002 10:23:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:14.002 10:23:07 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:09:14.002 10:23:07 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:09:14.002 10:23:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:14.002 10:23:07 -- common/autotest_common.sh@10 -- # set +x 00:09:14.002 10:23:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:14.002 10:23:07 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:09:14.002 10:23:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:14.002 10:23:07 -- common/autotest_common.sh@10 -- # set +x 00:09:14.937 10:23:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:14.937 10:23:08 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:09:14.937 10:23:08 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:09:14.937 10:23:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:14.938 10:23:08 -- common/autotest_common.sh@10 -- # set +x 00:09:16.313 10:23:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:16.313 00:09:16.313 real 0m2.138s 00:09:16.313 user 0m0.008s 00:09:16.313 sys 0m0.002s 00:09:16.313 10:23:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:16.313 10:23:09 -- common/autotest_common.sh@10 -- # set +x 00:09:16.313 
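The scheduler_create_thread subtest above drives the test app through its RPC plugin: each scheduler_thread_create registers a thread with an optional cpumask (-m) and an active-busy percentage (-a), and the returned thread ids (11 and 12 in this run) feed scheduler_thread_set_active and scheduler_thread_delete. Issued by hand, with the plugin module on the rpc.py path as the harness's rpc_cmd wrapper arranges, the sequence looks like:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin scheduler_plugin"
  $rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100   # 100% busy, pinned to core 0
  id=$($rpc scheduler_thread_create -n half_active -a 0)        # prints a thread id, e.g. 11
  $rpc scheduler_thread_set_active $id 50                       # raise it to 50% busy
  id=$($rpc scheduler_thread_create -n deleted -a 100)          # e.g. 12
  $rpc scheduler_thread_delete $id                              # remove it again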
************************************ 00:09:16.313 END TEST scheduler_create_thread 00:09:16.313 ************************************ 00:09:16.313 10:23:09 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:09:16.313 10:23:09 -- scheduler/scheduler.sh@46 -- # killprocess 106680 00:09:16.313 10:23:09 -- common/autotest_common.sh@926 -- # '[' -z 106680 ']' 00:09:16.313 10:23:09 -- common/autotest_common.sh@930 -- # kill -0 106680 00:09:16.313 10:23:09 -- common/autotest_common.sh@931 -- # uname 00:09:16.313 10:23:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:16.313 10:23:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 106680 00:09:16.313 killing process with pid 106680 00:09:16.313 10:23:09 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:09:16.313 10:23:09 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:09:16.313 10:23:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 106680' 00:09:16.313 10:23:09 -- common/autotest_common.sh@945 -- # kill 106680 00:09:16.313 10:23:09 -- common/autotest_common.sh@950 -- # wait 106680 00:09:16.570 [2024-07-12 10:23:10.309592] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:09:17.513 00:09:17.513 real 0m5.009s 00:09:17.513 user 0m8.354s 00:09:17.513 sys 0m0.399s 00:09:17.513 10:23:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:17.513 10:23:11 -- common/autotest_common.sh@10 -- # set +x 00:09:17.513 ************************************ 00:09:17.513 END TEST event_scheduler 00:09:17.513 ************************************ 00:09:17.513 10:23:11 -- event/event.sh@51 -- # modprobe -n nbd 00:09:17.513 10:23:11 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:09:17.513 10:23:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:17.513 10:23:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:17.513 10:23:11 -- common/autotest_common.sh@10 -- # set +x 00:09:17.513 ************************************ 00:09:17.513 START TEST app_repeat 00:09:17.513 ************************************ 00:09:17.513 10:23:11 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:09:17.513 10:23:11 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:17.513 10:23:11 -- event/event.sh@13 -- # nbd_list=("/dev/nbd0" "/dev/nbd1") 00:09:17.513 10:23:11 -- event/event.sh@13 -- # local nbd_list 00:09:17.513 10:23:11 -- event/event.sh@14 -- # bdev_list=("Malloc0" "Malloc1") 00:09:17.513 10:23:11 -- event/event.sh@14 -- # local bdev_list 00:09:17.513 10:23:11 -- event/event.sh@15 -- # local repeat_times=4 00:09:17.513 10:23:11 -- event/event.sh@17 -- # modprobe nbd 00:09:17.513 10:23:11 -- event/event.sh@19 -- # repeat_pid=106798 00:09:17.513 10:23:11 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:09:17.513 Process app_repeat pid: 106798 00:09:17.513 spdk_app_start Round 0 00:09:17.513 10:23:11 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:09:17.513 10:23:11 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 106798' 00:09:17.513 10:23:11 -- event/event.sh@23 -- # for i in {0..2} 00:09:17.513 10:23:11 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:09:17.513 10:23:11 -- event/event.sh@25 -- # waitforlisten 106798 /var/tmp/spdk-nbd.sock 00:09:17.513 10:23:11 -- common/autotest_common.sh@819 -- # '[' -z 106798 ']' 00:09:17.513 10:23:11 -- 
common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:17.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:17.513 10:23:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:17.513 10:23:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:17.513 10:23:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:17.513 10:23:11 -- common/autotest_common.sh@10 -- # set +x 00:09:17.513 [2024-07-12 10:23:11.421091] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:09:17.513 [2024-07-12 10:23:11.421286] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106798 ] 00:09:17.770 [2024-07-12 10:23:11.591523] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:18.029 [2024-07-12 10:23:11.760185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.029 [2024-07-12 10:23:11.760180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:18.594 10:23:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:18.594 10:23:12 -- common/autotest_common.sh@852 -- # return 0 00:09:18.594 10:23:12 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:18.852 Malloc0 00:09:18.852 10:23:12 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:19.110 Malloc1 00:09:19.110 10:23:13 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:19.110 10:23:13 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:19.110 10:23:13 -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:09:19.110 10:23:13 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:19.110 10:23:13 -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:09:19.110 10:23:13 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:19.110 10:23:13 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:19.110 10:23:13 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:19.110 10:23:13 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:09:19.110 10:23:13 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:19.110 10:23:13 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:09:19.110 10:23:13 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:19.110 10:23:13 -- bdev/nbd_common.sh@12 -- # local i 00:09:19.110 10:23:13 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:19.110 10:23:13 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:19.110 10:23:13 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:19.366 /dev/nbd0 00:09:19.624 10:23:13 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:19.624 10:23:13 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:19.624 10:23:13 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:09:19.624 10:23:13 -- common/autotest_common.sh@857 -- # local i 00:09:19.624 10:23:13 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:19.624 10:23:13 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:19.624 
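waitfornbd, being entered here, has two phases: poll /proc/partitions for up to 20 tries (the grep is the next trace line below) until the nbd device shows up, then prove the device actually answers by reading one 4 KiB block with O_DIRECT. Condensed:

  # condensed; the traced helper also retries the dd and keeps its scratch file
  # under the test directory rather than /tmp
  waitfornbd() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
      grep -q -w "$nbd_name" /proc/partitions && break
      sleep 0.1
    done
    dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    [ "$(stat -c %s /tmp/nbdtest)" -ne 0 ]   # a zero-byte read means the device never came up
  }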
10:23:13 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:09:19.624 10:23:13 -- common/autotest_common.sh@861 -- # break 00:09:19.624 10:23:13 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:19.624 10:23:13 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:19.624 10:23:13 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:19.624 1+0 records in 00:09:19.624 1+0 records out 00:09:19.624 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000352337 s, 11.6 MB/s 00:09:19.624 10:23:13 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:19.624 10:23:13 -- common/autotest_common.sh@874 -- # size=4096 00:09:19.624 10:23:13 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:19.624 10:23:13 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:19.624 10:23:13 -- common/autotest_common.sh@877 -- # return 0 00:09:19.624 10:23:13 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:19.624 10:23:13 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:19.624 10:23:13 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:19.882 /dev/nbd1 00:09:19.882 10:23:13 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:19.882 10:23:13 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:19.882 10:23:13 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:09:19.882 10:23:13 -- common/autotest_common.sh@857 -- # local i 00:09:19.882 10:23:13 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:19.882 10:23:13 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:19.882 10:23:13 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:09:19.882 10:23:13 -- common/autotest_common.sh@861 -- # break 00:09:19.882 10:23:13 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:19.882 10:23:13 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:19.882 10:23:13 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:19.882 1+0 records in 00:09:19.882 1+0 records out 00:09:19.882 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000365207 s, 11.2 MB/s 00:09:19.882 10:23:13 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:19.882 10:23:13 -- common/autotest_common.sh@874 -- # size=4096 00:09:19.882 10:23:13 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:19.882 10:23:13 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:19.882 10:23:13 -- common/autotest_common.sh@877 -- # return 0 00:09:19.882 10:23:13 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:19.882 10:23:13 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:19.882 10:23:13 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:19.882 10:23:13 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:19.882 10:23:13 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:20.140 10:23:13 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:20.140 { 00:09:20.140 "nbd_device": "/dev/nbd0", 00:09:20.140 "bdev_name": "Malloc0" 00:09:20.140 }, 00:09:20.140 { 00:09:20.140 "nbd_device": "/dev/nbd1", 00:09:20.140 "bdev_name": "Malloc1" 00:09:20.140 } 00:09:20.140 
]' 00:09:20.140 10:23:13 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:20.140 { 00:09:20.140 "nbd_device": "/dev/nbd0", 00:09:20.140 "bdev_name": "Malloc0" 00:09:20.140 }, 00:09:20.140 { 00:09:20.140 "nbd_device": "/dev/nbd1", 00:09:20.140 "bdev_name": "Malloc1" 00:09:20.140 } 00:09:20.140 ]' 00:09:20.140 10:23:13 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:20.140 10:23:13 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:20.140 /dev/nbd1' 00:09:20.140 10:23:13 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:20.140 /dev/nbd1' 00:09:20.140 10:23:13 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:20.140 10:23:13 -- bdev/nbd_common.sh@65 -- # count=2 00:09:20.140 10:23:13 -- bdev/nbd_common.sh@66 -- # echo 2 00:09:20.140 10:23:13 -- bdev/nbd_common.sh@95 -- # count=2 00:09:20.140 10:23:13 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:20.140 10:23:13 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:20.140 10:23:13 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:09:20.140 10:23:13 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:20.140 10:23:13 -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:20.140 10:23:13 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:20.140 10:23:13 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:20.140 10:23:13 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:20.140 256+0 records in 00:09:20.140 256+0 records out 00:09:20.140 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00721824 s, 145 MB/s 00:09:20.140 10:23:13 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:20.140 10:23:13 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:20.140 256+0 records in 00:09:20.140 256+0 records out 00:09:20.140 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0251219 s, 41.7 MB/s 00:09:20.140 10:23:13 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:20.140 10:23:13 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:20.140 256+0 records in 00:09:20.140 256+0 records out 00:09:20.140 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0313073 s, 33.5 MB/s 00:09:20.140 10:23:13 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:20.140 10:23:13 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:09:20.140 10:23:13 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:20.140 10:23:13 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:20.140 10:23:13 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:20.140 10:23:13 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:20.140 10:23:13 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:20.140 10:23:13 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:20.140 10:23:13 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:20.140 10:23:13 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:20.140 10:23:13 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:20.140 10:23:13 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:20.140 10:23:13 -- bdev/nbd_common.sh@103 -- # 
nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:20.140 10:23:13 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:20.140 10:23:13 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:09:20.140 10:23:13 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:20.140 10:23:13 -- bdev/nbd_common.sh@51 -- # local i 00:09:20.140 10:23:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:20.140 10:23:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:20.398 10:23:14 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:20.398 10:23:14 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:20.398 10:23:14 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:20.398 10:23:14 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:20.398 10:23:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:20.398 10:23:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:20.398 10:23:14 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:09:20.657 10:23:14 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:09:20.657 10:23:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:20.657 10:23:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:20.657 10:23:14 -- bdev/nbd_common.sh@41 -- # break 00:09:20.657 10:23:14 -- bdev/nbd_common.sh@45 -- # return 0 00:09:20.657 10:23:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:20.657 10:23:14 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:20.915 10:23:14 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:20.915 10:23:14 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:20.915 10:23:14 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:20.915 10:23:14 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:20.915 10:23:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:20.915 10:23:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:20.915 10:23:14 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:09:20.915 10:23:14 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:09:20.915 10:23:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:20.915 10:23:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:20.915 10:23:14 -- bdev/nbd_common.sh@41 -- # break 00:09:20.915 10:23:14 -- bdev/nbd_common.sh@45 -- # return 0 00:09:20.915 10:23:14 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:20.915 10:23:14 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:20.915 10:23:14 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:21.173 10:23:14 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:21.173 10:23:14 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:21.173 10:23:14 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:21.173 10:23:15 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:21.173 10:23:15 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:21.173 10:23:15 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:21.173 10:23:15 -- bdev/nbd_common.sh@65 -- # true 00:09:21.173 10:23:15 -- bdev/nbd_common.sh@65 -- # count=0 00:09:21.173 10:23:15 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:21.173 10:23:15 -- bdev/nbd_common.sh@104 -- # count=0 00:09:21.173 10:23:15 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:21.173 10:23:15 -- bdev/nbd_common.sh@109 -- # return 0 00:09:21.173 
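The stop sequence above reduces to a small polling helper: nbd_stop_disk is issued over the RPC socket, then the script waits for the kernel to retire the device. A minimal bash sketch of that loop, reconstructed from the xtrace lines (the helper name, 20-try budget, and 0.1 s sleep all appear in the trace; the actual SPDK source may differ in detail):

waitfornbd_exit() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        # The device is fully torn down once its entry leaves /proc/partitions.
        if ! grep -q -w "$nbd_name" /proc/partitions; then
            return 0
        fi
        sleep 0.1
    done
    return 1   # still present after ~2 s of polling; let the caller fail
}
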
10:23:15 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:21.738 10:23:15 -- event/event.sh@35 -- # sleep 3 00:09:22.670 [2024-07-12 10:23:16.573624] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:22.928 [2024-07-12 10:23:16.783499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:22.928 [2024-07-12 10:23:16.783506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.187 [2024-07-12 10:23:16.966164] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:23.187 [2024-07-12 10:23:16.966371] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:24.561 spdk_app_start Round 1 00:09:24.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:24.561 10:23:18 -- event/event.sh@23 -- # for i in {0..2} 00:09:24.561 10:23:18 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:09:24.561 10:23:18 -- event/event.sh@25 -- # waitforlisten 106798 /var/tmp/spdk-nbd.sock 00:09:24.561 10:23:18 -- common/autotest_common.sh@819 -- # '[' -z 106798 ']' 00:09:24.561 10:23:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:24.561 10:23:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:24.561 10:23:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:24.561 10:23:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:24.561 10:23:18 -- common/autotest_common.sh@10 -- # set +x 00:09:24.819 10:23:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:24.819 10:23:18 -- common/autotest_common.sh@852 -- # return 0 00:09:24.819 10:23:18 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:25.077 Malloc0 00:09:25.077 10:23:18 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:25.335 Malloc1 00:09:25.335 10:23:19 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:25.335 10:23:19 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:25.335 10:23:19 -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:09:25.335 10:23:19 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:25.335 10:23:19 -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:09:25.335 10:23:19 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:25.335 10:23:19 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:25.335 10:23:19 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:25.336 10:23:19 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:09:25.336 10:23:19 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:25.336 10:23:19 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:09:25.336 10:23:19 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:25.336 10:23:19 -- bdev/nbd_common.sh@12 -- # local i 00:09:25.336 10:23:19 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:25.336 10:23:19 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:25.336 10:23:19 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:25.593 /dev/nbd0 
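Each nbd_start_disk call is followed by the readiness check traced below (and already seen in the earlier round): wait for the device node to appear, then prove it serves I/O by reading one block with O_DIRECT. A condensed sketch, assembled from the xtrace (the scratch path is illustrative; the log uses test/event/nbdtest):

waitfornbd() {
    local nbd_name=$1 i
    local tmp_file=/tmp/nbdtest   # illustrative; the log writes under test/event/
    # Step 1: wait for the kernel to publish the device.
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    # Step 2: presence in /proc/partitions is not enough, so read one 4 KiB
    # block with O_DIRECT and confirm that data actually arrived.
    for ((i = 1; i <= 20; i++)); do
        if dd "if=/dev/$nbd_name" of="$tmp_file" bs=4096 count=1 iflag=direct 2> /dev/null; then
            local size
            size=$(stat -c %s "$tmp_file")
            rm -f "$tmp_file"
            [[ $size != 0 ]] && return 0
        fi
        sleep 0.1
    done
    return 1
}
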
00:09:25.593 10:23:19 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:25.593 10:23:19 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:25.593 10:23:19 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:09:25.593 10:23:19 -- common/autotest_common.sh@857 -- # local i 00:09:25.593 10:23:19 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:25.593 10:23:19 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:25.593 10:23:19 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:09:25.593 10:23:19 -- common/autotest_common.sh@861 -- # break 00:09:25.593 10:23:19 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:25.593 10:23:19 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:25.593 10:23:19 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:25.593 1+0 records in 00:09:25.593 1+0 records out 00:09:25.593 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000288704 s, 14.2 MB/s 00:09:25.593 10:23:19 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:25.593 10:23:19 -- common/autotest_common.sh@874 -- # size=4096 00:09:25.593 10:23:19 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:25.593 10:23:19 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:25.593 10:23:19 -- common/autotest_common.sh@877 -- # return 0 00:09:25.593 10:23:19 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:25.593 10:23:19 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:25.593 10:23:19 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:25.851 /dev/nbd1 00:09:25.851 10:23:19 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:25.851 10:23:19 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:25.851 10:23:19 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:09:25.851 10:23:19 -- common/autotest_common.sh@857 -- # local i 00:09:25.851 10:23:19 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:25.851 10:23:19 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:25.851 10:23:19 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:09:25.851 10:23:19 -- common/autotest_common.sh@861 -- # break 00:09:25.851 10:23:19 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:25.851 10:23:19 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:25.851 10:23:19 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:25.851 1+0 records in 00:09:25.851 1+0 records out 00:09:25.851 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000322179 s, 12.7 MB/s 00:09:25.851 10:23:19 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:25.851 10:23:19 -- common/autotest_common.sh@874 -- # size=4096 00:09:25.851 10:23:19 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:25.851 10:23:19 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:25.851 10:23:19 -- common/autotest_common.sh@877 -- # return 0 00:09:25.851 10:23:19 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:25.851 10:23:19 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:25.851 10:23:19 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:25.851 10:23:19 -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:09:25.851 10:23:19 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:26.109 10:23:19 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:26.109 { 00:09:26.109 "nbd_device": "/dev/nbd0", 00:09:26.109 "bdev_name": "Malloc0" 00:09:26.109 }, 00:09:26.109 { 00:09:26.109 "nbd_device": "/dev/nbd1", 00:09:26.109 "bdev_name": "Malloc1" 00:09:26.109 } 00:09:26.109 ]' 00:09:26.109 10:23:19 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:26.109 { 00:09:26.109 "nbd_device": "/dev/nbd0", 00:09:26.109 "bdev_name": "Malloc0" 00:09:26.109 }, 00:09:26.109 { 00:09:26.109 "nbd_device": "/dev/nbd1", 00:09:26.109 "bdev_name": "Malloc1" 00:09:26.109 } 00:09:26.109 ]' 00:09:26.109 10:23:19 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:26.109 10:23:20 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:26.109 /dev/nbd1' 00:09:26.109 10:23:20 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:26.109 /dev/nbd1' 00:09:26.109 10:23:20 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:26.109 10:23:20 -- bdev/nbd_common.sh@65 -- # count=2 00:09:26.109 10:23:20 -- bdev/nbd_common.sh@66 -- # echo 2 00:09:26.109 10:23:20 -- bdev/nbd_common.sh@95 -- # count=2 00:09:26.109 10:23:20 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:26.109 10:23:20 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:26.109 10:23:20 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:09:26.109 10:23:20 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:26.109 10:23:20 -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:26.109 10:23:20 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:26.109 10:23:20 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:26.109 10:23:20 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:26.109 256+0 records in 00:09:26.109 256+0 records out 00:09:26.109 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00808033 s, 130 MB/s 00:09:26.109 10:23:20 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:26.109 10:23:20 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:26.366 256+0 records in 00:09:26.366 256+0 records out 00:09:26.366 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.028258 s, 37.1 MB/s 00:09:26.366 10:23:20 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:26.366 10:23:20 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:26.366 256+0 records in 00:09:26.366 256+0 records out 00:09:26.366 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0305044 s, 34.4 MB/s 00:09:26.366 10:23:20 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:26.366 10:23:20 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:09:26.366 10:23:20 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:26.366 10:23:20 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:26.366 10:23:20 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:26.366 10:23:20 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:26.366 10:23:20 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:26.366 10:23:20 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:26.366 10:23:20 -- 
bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:26.366 10:23:20 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:26.366 10:23:20 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:26.366 10:23:20 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:26.366 10:23:20 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:26.366 10:23:20 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:26.366 10:23:20 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:09:26.366 10:23:20 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:26.366 10:23:20 -- bdev/nbd_common.sh@51 -- # local i 00:09:26.366 10:23:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:26.366 10:23:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:26.623 10:23:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:26.623 10:23:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:26.623 10:23:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:26.623 10:23:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:26.623 10:23:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:26.623 10:23:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:26.623 10:23:20 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:09:26.623 10:23:20 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:09:26.623 10:23:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:26.623 10:23:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:26.624 10:23:20 -- bdev/nbd_common.sh@41 -- # break 00:09:26.624 10:23:20 -- bdev/nbd_common.sh@45 -- # return 0 00:09:26.624 10:23:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:26.624 10:23:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:26.881 10:23:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:26.881 10:23:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:26.881 10:23:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:26.881 10:23:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:26.881 10:23:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:26.881 10:23:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:26.881 10:23:20 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:09:26.881 10:23:20 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:09:26.881 10:23:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:26.881 10:23:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:26.881 10:23:20 -- bdev/nbd_common.sh@41 -- # break 00:09:26.881 10:23:20 -- bdev/nbd_common.sh@45 -- # return 0 00:09:26.881 10:23:20 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:26.881 10:23:20 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:26.881 10:23:20 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:27.139 10:23:21 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:27.139 10:23:21 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:27.139 10:23:21 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:27.139 10:23:21 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:27.139 10:23:21 -- bdev/nbd_common.sh@65 
-- # echo '' 00:09:27.397 10:23:21 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:27.397 10:23:21 -- bdev/nbd_common.sh@65 -- # true 00:09:27.397 10:23:21 -- bdev/nbd_common.sh@65 -- # count=0 00:09:27.397 10:23:21 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:27.397 10:23:21 -- bdev/nbd_common.sh@104 -- # count=0 00:09:27.397 10:23:21 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:27.397 10:23:21 -- bdev/nbd_common.sh@109 -- # return 0 00:09:27.397 10:23:21 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:27.655 10:23:21 -- event/event.sh@35 -- # sleep 3 00:09:29.030 [2024-07-12 10:23:22.686959] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:29.030 [2024-07-12 10:23:22.917519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:29.030 [2024-07-12 10:23:22.917530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.287 [2024-07-12 10:23:23.116800] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:29.287 [2024-07-12 10:23:23.116915] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:30.667 spdk_app_start Round 2 00:09:30.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:30.667 10:23:24 -- event/event.sh@23 -- # for i in {0..2} 00:09:30.667 10:23:24 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:09:30.667 10:23:24 -- event/event.sh@25 -- # waitforlisten 106798 /var/tmp/spdk-nbd.sock 00:09:30.667 10:23:24 -- common/autotest_common.sh@819 -- # '[' -z 106798 ']' 00:09:30.667 10:23:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:30.667 10:23:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:30.667 10:23:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
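The cycle that just completed is the heart of each round: nbd_dd_data_verify generates one shared random payload, writes it through both nbd devices, then compares each device back against the payload byte for byte. A condensed sketch of that write/verify pair, with sizes taken from the trace (bs=4096 count=256, i.e. 1 MiB; the temp path is illustrative):

tmp_file=/tmp/nbdrandtest          # the log keeps this under test/event/
nbd_list=(/dev/nbd0 /dev/nbd1)

dd if=/dev/urandom of="$tmp_file" bs=4096 count=256                # 1 MiB payload
for dev in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct     # write pass
done
for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$dev"                                # verify pass
done
rm "$tmp_file"
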
00:09:30.667 10:23:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:30.667 10:23:24 -- common/autotest_common.sh@10 -- # set +x 00:09:30.931 10:23:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:30.931 10:23:24 -- common/autotest_common.sh@852 -- # return 0 00:09:30.931 10:23:24 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:31.190 Malloc0 00:09:31.190 10:23:25 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:31.449 Malloc1 00:09:31.449 10:23:25 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:31.449 10:23:25 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:31.449 10:23:25 -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:09:31.449 10:23:25 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:31.449 10:23:25 -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:09:31.449 10:23:25 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:31.449 10:23:25 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:31.449 10:23:25 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:31.449 10:23:25 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:09:31.449 10:23:25 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:31.449 10:23:25 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:09:31.449 10:23:25 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:31.449 10:23:25 -- bdev/nbd_common.sh@12 -- # local i 00:09:31.449 10:23:25 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:31.449 10:23:25 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:31.449 10:23:25 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:31.708 /dev/nbd0 00:09:31.708 10:23:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:31.708 10:23:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:31.708 10:23:25 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:09:31.708 10:23:25 -- common/autotest_common.sh@857 -- # local i 00:09:31.708 10:23:25 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:31.708 10:23:25 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:31.708 10:23:25 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:09:31.708 10:23:25 -- common/autotest_common.sh@861 -- # break 00:09:31.708 10:23:25 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:31.708 10:23:25 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:31.708 10:23:25 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:31.708 1+0 records in 00:09:31.708 1+0 records out 00:09:31.708 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000397523 s, 10.3 MB/s 00:09:31.708 10:23:25 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:31.708 10:23:25 -- common/autotest_common.sh@874 -- # size=4096 00:09:31.708 10:23:25 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:31.708 10:23:25 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:31.708 10:23:25 -- common/autotest_common.sh@877 -- # return 0 00:09:31.708 10:23:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:31.708 10:23:25 -- bdev/nbd_common.sh@14 -- # 
(( i < 2 )) 00:09:31.708 10:23:25 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:31.967 /dev/nbd1 00:09:31.967 10:23:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:31.967 10:23:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:31.967 10:23:25 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:09:31.967 10:23:25 -- common/autotest_common.sh@857 -- # local i 00:09:31.967 10:23:25 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:31.967 10:23:25 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:31.967 10:23:25 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:09:31.967 10:23:25 -- common/autotest_common.sh@861 -- # break 00:09:31.967 10:23:25 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:31.967 10:23:25 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:31.967 10:23:25 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:31.967 1+0 records in 00:09:31.967 1+0 records out 00:09:31.967 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000351543 s, 11.7 MB/s 00:09:31.967 10:23:25 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:31.967 10:23:25 -- common/autotest_common.sh@874 -- # size=4096 00:09:31.967 10:23:25 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:31.967 10:23:25 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:31.967 10:23:25 -- common/autotest_common.sh@877 -- # return 0 00:09:31.967 10:23:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:31.967 10:23:25 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:31.968 10:23:25 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:31.968 10:23:25 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:31.968 10:23:25 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:32.226 10:23:25 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:32.226 { 00:09:32.226 "nbd_device": "/dev/nbd0", 00:09:32.226 "bdev_name": "Malloc0" 00:09:32.226 }, 00:09:32.227 { 00:09:32.227 "nbd_device": "/dev/nbd1", 00:09:32.227 "bdev_name": "Malloc1" 00:09:32.227 } 00:09:32.227 ]' 00:09:32.227 10:23:25 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:32.227 { 00:09:32.227 "nbd_device": "/dev/nbd0", 00:09:32.227 "bdev_name": "Malloc0" 00:09:32.227 }, 00:09:32.227 { 00:09:32.227 "nbd_device": "/dev/nbd1", 00:09:32.227 "bdev_name": "Malloc1" 00:09:32.227 } 00:09:32.227 ]' 00:09:32.227 10:23:25 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:32.227 10:23:25 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:32.227 /dev/nbd1' 00:09:32.227 10:23:25 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:32.227 /dev/nbd1' 00:09:32.227 10:23:25 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:32.227 10:23:25 -- bdev/nbd_common.sh@65 -- # count=2 00:09:32.227 10:23:25 -- bdev/nbd_common.sh@66 -- # echo 2 00:09:32.227 10:23:25 -- bdev/nbd_common.sh@95 -- # count=2 00:09:32.227 10:23:25 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:32.227 10:23:25 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:32.227 10:23:25 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:09:32.227 10:23:25 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:32.227 10:23:25 -- 
bdev/nbd_common.sh@71 -- # local operation=write 00:09:32.227 10:23:25 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:32.227 10:23:25 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:32.227 10:23:25 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:32.227 256+0 records in 00:09:32.227 256+0 records out 00:09:32.227 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00389185 s, 269 MB/s 00:09:32.227 10:23:26 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:32.227 10:23:26 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:32.227 256+0 records in 00:09:32.227 256+0 records out 00:09:32.227 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0269278 s, 38.9 MB/s 00:09:32.227 10:23:26 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:32.227 10:23:26 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:32.227 256+0 records in 00:09:32.227 256+0 records out 00:09:32.227 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0337582 s, 31.1 MB/s 00:09:32.227 10:23:26 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:32.227 10:23:26 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:09:32.227 10:23:26 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:32.227 10:23:26 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:32.227 10:23:26 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:32.227 10:23:26 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:32.227 10:23:26 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:32.227 10:23:26 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:32.227 10:23:26 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:32.227 10:23:26 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:32.227 10:23:26 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:32.227 10:23:26 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:32.227 10:23:26 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:32.227 10:23:26 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:32.227 10:23:26 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:09:32.227 10:23:26 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:32.227 10:23:26 -- bdev/nbd_common.sh@51 -- # local i 00:09:32.227 10:23:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:32.227 10:23:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:32.485 10:23:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:32.485 10:23:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:32.485 10:23:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:32.485 10:23:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:32.485 10:23:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:32.485 10:23:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:32.485 10:23:26 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:09:32.485 10:23:26 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:09:32.485 
10:23:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:32.485 10:23:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:32.485 10:23:26 -- bdev/nbd_common.sh@41 -- # break 00:09:32.485 10:23:26 -- bdev/nbd_common.sh@45 -- # return 0 00:09:32.485 10:23:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:32.485 10:23:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:32.744 10:23:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:32.744 10:23:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:32.744 10:23:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:32.744 10:23:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:32.744 10:23:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:32.744 10:23:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:32.744 10:23:26 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:09:33.003 10:23:26 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:09:33.003 10:23:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:33.003 10:23:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:33.003 10:23:26 -- bdev/nbd_common.sh@41 -- # break 00:09:33.003 10:23:26 -- bdev/nbd_common.sh@45 -- # return 0 00:09:33.003 10:23:26 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:33.003 10:23:26 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:33.003 10:23:26 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:33.262 10:23:26 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:33.262 10:23:26 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:33.262 10:23:26 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:33.262 10:23:27 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:33.262 10:23:27 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:33.262 10:23:27 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:33.262 10:23:27 -- bdev/nbd_common.sh@65 -- # true 00:09:33.262 10:23:27 -- bdev/nbd_common.sh@65 -- # count=0 00:09:33.262 10:23:27 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:33.262 10:23:27 -- bdev/nbd_common.sh@104 -- # count=0 00:09:33.262 10:23:27 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:33.262 10:23:27 -- bdev/nbd_common.sh@109 -- # return 0 00:09:33.262 10:23:27 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:33.830 10:23:27 -- event/event.sh@35 -- # sleep 3 00:09:35.232 [2024-07-12 10:23:28.724593] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:35.232 [2024-07-12 10:23:28.966085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:35.232 [2024-07-12 10:23:28.966096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.490 [2024-07-12 10:23:29.172140] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:35.490 [2024-07-12 10:23:29.172300] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:36.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
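Every round is gated by the waitforlisten helper whose banner appears above. The trace only shows its arguments (rpc_addr, max_retries=100) and the "Waiting for process..." message, so the body below is a plausible minimal reimplementation under those assumptions, not the exact SPDK helper (which probes the RPC server rather than just the socket file):

waitforlisten() {
    local pid=$1
    local rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100 i
    [[ -z $pid ]] && return 1
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2> /dev/null || return 1   # give up if the process died
        [[ -S $rpc_addr ]] && return 0            # listener socket is up
        sleep 0.1
    done
    return 1
}
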
00:09:36.867 10:23:30 -- event/event.sh@38 -- # waitforlisten 106798 /var/tmp/spdk-nbd.sock 00:09:36.867 10:23:30 -- common/autotest_common.sh@819 -- # '[' -z 106798 ']' 00:09:36.867 10:23:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:36.867 10:23:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:36.867 10:23:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:36.867 10:23:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:36.867 10:23:30 -- common/autotest_common.sh@10 -- # set +x 00:09:36.867 10:23:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:36.867 10:23:30 -- common/autotest_common.sh@852 -- # return 0 00:09:36.867 10:23:30 -- event/event.sh@39 -- # killprocess 106798 00:09:36.867 10:23:30 -- common/autotest_common.sh@926 -- # '[' -z 106798 ']' 00:09:36.867 10:23:30 -- common/autotest_common.sh@930 -- # kill -0 106798 00:09:36.867 10:23:30 -- common/autotest_common.sh@931 -- # uname 00:09:36.867 10:23:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:36.867 10:23:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 106798 00:09:36.867 killing process with pid 106798 00:09:36.867 10:23:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:36.867 10:23:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:36.867 10:23:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 106798' 00:09:36.867 10:23:30 -- common/autotest_common.sh@945 -- # kill 106798 00:09:36.867 10:23:30 -- common/autotest_common.sh@950 -- # wait 106798 00:09:38.243 spdk_app_start is called in Round 0. 00:09:38.243 Shutdown signal received, stop current app iteration 00:09:38.243 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 reinitialization... 00:09:38.243 spdk_app_start is called in Round 1. 00:09:38.243 Shutdown signal received, stop current app iteration 00:09:38.243 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 reinitialization... 00:09:38.243 spdk_app_start is called in Round 2. 00:09:38.243 Shutdown signal received, stop current app iteration 00:09:38.243 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 reinitialization... 00:09:38.243 spdk_app_start is called in Round 3. 
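The killprocess steps traced above (kill -0, the uname/ps identity check, then kill and wait) amount to the sketch below. The reactor_0/sudo comparison is visible in the trace; how the real helper handles a sudo-wrapped target is not shown, so that branch is hedged here as a plain refusal:

killprocess() {
    local pid=$1
    [[ -z $pid ]] && return 1
    kill -0 "$pid" || return 1                    # target must still be running
    if [[ $(uname) == Linux ]]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        # In the trace this resolves to reactor_0; a sudo wrapper would need
        # special handling, which this sketch sidesteps.
        [[ $process_name == sudo ]] && return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"    # reap the child so its exit status is collected
}
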
00:09:38.243 Shutdown signal received, stop current app iteration 00:09:38.243 ************************************ 00:09:38.243 END TEST app_repeat 00:09:38.243 ************************************ 00:09:38.243 10:23:31 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:09:38.243 10:23:31 -- event/event.sh@42 -- # return 0 00:09:38.243 00:09:38.243 real 0m20.530s 00:09:38.243 user 0m43.381s 00:09:38.243 sys 0m2.643s 00:09:38.243 10:23:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:38.243 10:23:31 -- common/autotest_common.sh@10 -- # set +x 00:09:38.243 10:23:31 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:09:38.243 10:23:31 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:38.243 10:23:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:38.243 10:23:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:38.243 10:23:31 -- common/autotest_common.sh@10 -- # set +x 00:09:38.243 ************************************ 00:09:38.243 START TEST cpu_locks 00:09:38.243 ************************************ 00:09:38.243 10:23:31 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:38.243 * Looking for test storage... 00:09:38.243 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:09:38.243 10:23:32 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:09:38.244 10:23:32 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:09:38.244 10:23:32 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:09:38.244 10:23:32 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:09:38.244 10:23:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:38.244 10:23:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:38.244 10:23:32 -- common/autotest_common.sh@10 -- # set +x 00:09:38.244 ************************************ 00:09:38.244 START TEST default_locks 00:09:38.244 ************************************ 00:09:38.244 10:23:32 -- common/autotest_common.sh@1104 -- # default_locks 00:09:38.244 10:23:32 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=107375 00:09:38.244 10:23:32 -- event/cpu_locks.sh@47 -- # waitforlisten 107375 00:09:38.244 10:23:32 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:38.244 10:23:32 -- common/autotest_common.sh@819 -- # '[' -z 107375 ']' 00:09:38.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:38.244 10:23:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.244 10:23:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:38.244 10:23:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.244 10:23:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:38.244 10:23:32 -- common/autotest_common.sh@10 -- # set +x 00:09:38.244 [2024-07-12 10:23:32.117336] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:09:38.244 [2024-07-12 10:23:32.117560] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107375 ] 00:09:38.503 [2024-07-12 10:23:32.282619] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.761 [2024-07-12 10:23:32.530894] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:38.761 [2024-07-12 10:23:32.531190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.137 10:23:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:40.137 10:23:33 -- common/autotest_common.sh@852 -- # return 0 00:09:40.137 10:23:33 -- event/cpu_locks.sh@49 -- # locks_exist 107375 00:09:40.137 10:23:33 -- event/cpu_locks.sh@22 -- # lslocks -p 107375 00:09:40.137 10:23:33 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:40.137 10:23:33 -- event/cpu_locks.sh@50 -- # killprocess 107375 00:09:40.137 10:23:33 -- common/autotest_common.sh@926 -- # '[' -z 107375 ']' 00:09:40.137 10:23:33 -- common/autotest_common.sh@930 -- # kill -0 107375 00:09:40.137 10:23:33 -- common/autotest_common.sh@931 -- # uname 00:09:40.137 10:23:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:40.137 10:23:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 107375 00:09:40.137 10:23:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:40.137 killing process with pid 107375 00:09:40.137 10:23:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:40.137 10:23:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 107375' 00:09:40.137 10:23:33 -- common/autotest_common.sh@945 -- # kill 107375 00:09:40.137 10:23:33 -- common/autotest_common.sh@950 -- # wait 107375 00:09:42.669 10:23:36 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 107375 00:09:42.669 10:23:36 -- common/autotest_common.sh@640 -- # local es=0 00:09:42.669 10:23:36 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 107375 00:09:42.669 10:23:36 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:09:42.669 10:23:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:42.669 10:23:36 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:09:42.669 10:23:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:42.669 10:23:36 -- common/autotest_common.sh@643 -- # waitforlisten 107375 00:09:42.669 10:23:36 -- common/autotest_common.sh@819 -- # '[' -z 107375 ']' 00:09:42.669 10:23:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:42.669 10:23:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:42.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:42.669 10:23:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
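The locks_exist check above is the whole assertion of default_locks: spdk_tgt takes a file lock per claimed core, and lslocks lists every lock a pid holds. A one-liner sketch (the /var/tmp/spdk_cpu_lock* prefix comes from the no_locks glob later in the trace; the per-core suffix is inferred):

locks_exist() {
    # List the pid's locks and look for the spdk_cpu_lock file it should hold
    # for each core in its cpumask.
    lslocks -p "$1" | grep -q spdk_cpu_lock
}
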
00:09:42.669 10:23:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:42.669 10:23:36 -- common/autotest_common.sh@10 -- # set +x 00:09:42.669 ERROR: process (pid: 107375) is no longer running 00:09:42.669 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (107375) - No such process 00:09:42.669 10:23:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:42.669 10:23:36 -- common/autotest_common.sh@852 -- # return 1 00:09:42.669 10:23:36 -- common/autotest_common.sh@643 -- # es=1 00:09:42.669 10:23:36 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:42.669 10:23:36 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:09:42.669 10:23:36 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:42.669 10:23:36 -- event/cpu_locks.sh@54 -- # no_locks 00:09:42.669 10:23:36 -- event/cpu_locks.sh@26 -- # lock_files=(/var/tmp/spdk_cpu_lock*) 00:09:42.669 10:23:36 -- event/cpu_locks.sh@26 -- # local lock_files 00:09:42.669 10:23:36 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:42.669 00:09:42.669 real 0m4.262s 00:09:42.669 user 0m4.315s 00:09:42.669 sys 0m0.761s 00:09:42.669 ************************************ 00:09:42.669 END TEST default_locks 00:09:42.669 ************************************ 00:09:42.669 10:23:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:42.669 10:23:36 -- common/autotest_common.sh@10 -- # set +x 00:09:42.669 10:23:36 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:09:42.669 10:23:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:42.669 10:23:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:42.669 10:23:36 -- common/autotest_common.sh@10 -- # set +x 00:09:42.669 ************************************ 00:09:42.669 START TEST default_locks_via_rpc 00:09:42.669 ************************************ 00:09:42.669 10:23:36 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:09:42.669 10:23:36 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=107481 00:09:42.669 10:23:36 -- event/cpu_locks.sh@63 -- # waitforlisten 107481 00:09:42.669 10:23:36 -- common/autotest_common.sh@819 -- # '[' -z 107481 ']' 00:09:42.669 10:23:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:42.669 10:23:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:42.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:42.669 10:23:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:42.669 10:23:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:42.669 10:23:36 -- common/autotest_common.sh@10 -- # set +x 00:09:42.669 10:23:36 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:42.669 [2024-07-12 10:23:36.430262] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
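The NOT wrapper exercised above inverts an exit status so a test can assert that a command fails, while still distinguishing an orderly failure from a crash. A minimal sketch matching the traced logic (es=1, the es > 128 signal check, the final negation); the real helper additionally validates its argument with type -t, which is omitted here:

NOT() {
    local es=0
    "$@" || es=$?
    # Codes above 128 mean death by signal, i.e. a crash rather than the
    # orderly failure the assertion wants to see.
    (( es > 128 )) && return 1
    (( es != 0 ))   # succeed only if the wrapped command failed
}

Used as above: NOT waitforlisten 107375 passes precisely because the killed target can never come back to listen.
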
00:09:42.669 [2024-07-12 10:23:36.430605] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107481 ] 00:09:42.669 [2024-07-12 10:23:36.598637] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.927 [2024-07-12 10:23:36.843851] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:42.927 [2024-07-12 10:23:36.844085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.304 10:23:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:44.304 10:23:38 -- common/autotest_common.sh@852 -- # return 0 00:09:44.304 10:23:38 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:09:44.304 10:23:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:44.304 10:23:38 -- common/autotest_common.sh@10 -- # set +x 00:09:44.304 10:23:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:44.304 10:23:38 -- event/cpu_locks.sh@67 -- # no_locks 00:09:44.304 10:23:38 -- event/cpu_locks.sh@26 -- # lock_files=(/var/tmp/spdk_cpu_lock*) 00:09:44.304 10:23:38 -- event/cpu_locks.sh@26 -- # local lock_files 00:09:44.304 10:23:38 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:44.304 10:23:38 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:09:44.304 10:23:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:44.304 10:23:38 -- common/autotest_common.sh@10 -- # set +x 00:09:44.304 10:23:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:44.304 10:23:38 -- event/cpu_locks.sh@71 -- # locks_exist 107481 00:09:44.304 10:23:38 -- event/cpu_locks.sh@22 -- # lslocks -p 107481 00:09:44.304 10:23:38 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:44.304 10:23:38 -- event/cpu_locks.sh@73 -- # killprocess 107481 00:09:44.304 10:23:38 -- common/autotest_common.sh@926 -- # '[' -z 107481 ']' 00:09:44.304 10:23:38 -- common/autotest_common.sh@930 -- # kill -0 107481 00:09:44.304 10:23:38 -- common/autotest_common.sh@931 -- # uname 00:09:44.304 10:23:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:44.304 10:23:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 107481 00:09:44.575 10:23:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:44.575 10:23:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:44.575 killing process with pid 107481 00:09:44.575 10:23:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 107481' 00:09:44.575 10:23:38 -- common/autotest_common.sh@945 -- # kill 107481 00:09:44.575 10:23:38 -- common/autotest_common.sh@950 -- # wait 107481 00:09:47.124 00:09:47.124 real 0m4.133s 00:09:47.124 user 0m4.115s 00:09:47.124 sys 0m0.732s 00:09:47.124 10:23:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:47.124 10:23:40 -- common/autotest_common.sh@10 -- # set +x 00:09:47.124 ************************************ 00:09:47.124 END TEST default_locks_via_rpc 00:09:47.124 ************************************ 00:09:47.124 10:23:40 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:09:47.124 10:23:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:47.124 10:23:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:47.124 10:23:40 -- common/autotest_common.sh@10 -- # set +x 00:09:47.124 
************************************ 00:09:47.124 START TEST non_locking_app_on_locked_coremask 00:09:47.124 ************************************ 00:09:47.124 10:23:40 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:09:47.124 10:23:40 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=107568 00:09:47.124 10:23:40 -- event/cpu_locks.sh@81 -- # waitforlisten 107568 /var/tmp/spdk.sock 00:09:47.124 10:23:40 -- common/autotest_common.sh@819 -- # '[' -z 107568 ']' 00:09:47.124 10:23:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:47.124 10:23:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:47.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:47.124 10:23:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:47.124 10:23:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:47.124 10:23:40 -- common/autotest_common.sh@10 -- # set +x 00:09:47.124 10:23:40 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:47.124 [2024-07-12 10:23:40.616376] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:09:47.124 [2024-07-12 10:23:40.616815] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107568 ] 00:09:47.124 [2024-07-12 10:23:40.779597] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.124 [2024-07-12 10:23:40.962517] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:47.124 [2024-07-12 10:23:40.962748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.496 10:23:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:48.496 10:23:42 -- common/autotest_common.sh@852 -- # return 0 00:09:48.496 10:23:42 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=107596 00:09:48.496 10:23:42 -- event/cpu_locks.sh@85 -- # waitforlisten 107596 /var/tmp/spdk2.sock 00:09:48.496 10:23:42 -- common/autotest_common.sh@819 -- # '[' -z 107596 ']' 00:09:48.496 10:23:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:48.496 10:23:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:48.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:48.496 10:23:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:48.496 10:23:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:48.496 10:23:42 -- common/autotest_common.sh@10 -- # set +x 00:09:48.496 10:23:42 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:09:48.496 [2024-07-12 10:23:42.238214] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:09:48.496 [2024-07-12 10:23:42.238396] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107596 ] 00:09:48.496 [2024-07-12 10:23:42.410971] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:48.496 [2024-07-12 10:23:42.411046] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.060 [2024-07-12 10:23:42.767802] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:49.060 [2024-07-12 10:23:42.768030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.957 10:23:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:50.957 10:23:44 -- common/autotest_common.sh@852 -- # return 0 00:09:50.957 10:23:44 -- event/cpu_locks.sh@87 -- # locks_exist 107568 00:09:50.957 10:23:44 -- event/cpu_locks.sh@22 -- # lslocks -p 107568 00:09:50.957 10:23:44 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:51.214 10:23:44 -- event/cpu_locks.sh@89 -- # killprocess 107568 00:09:51.214 10:23:44 -- common/autotest_common.sh@926 -- # '[' -z 107568 ']' 00:09:51.214 10:23:44 -- common/autotest_common.sh@930 -- # kill -0 107568 00:09:51.214 10:23:44 -- common/autotest_common.sh@931 -- # uname 00:09:51.214 10:23:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:51.214 10:23:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 107568 00:09:51.214 10:23:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:51.214 killing process with pid 107568 00:09:51.214 10:23:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:51.214 10:23:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 107568' 00:09:51.214 10:23:44 -- common/autotest_common.sh@945 -- # kill 107568 00:09:51.214 10:23:44 -- common/autotest_common.sh@950 -- # wait 107568 00:09:55.394 10:23:48 -- event/cpu_locks.sh@90 -- # killprocess 107596 00:09:55.394 10:23:48 -- common/autotest_common.sh@926 -- # '[' -z 107596 ']' 00:09:55.394 10:23:48 -- common/autotest_common.sh@930 -- # kill -0 107596 00:09:55.394 10:23:48 -- common/autotest_common.sh@931 -- # uname 00:09:55.394 10:23:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:55.394 10:23:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 107596 00:09:55.394 10:23:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:55.394 10:23:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:55.394 killing process with pid 107596 00:09:55.394 10:23:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 107596' 00:09:55.394 10:23:48 -- common/autotest_common.sh@945 -- # kill 107596 00:09:55.394 10:23:48 -- common/autotest_common.sh@950 -- # wait 107596 00:09:57.293 00:09:57.293 real 0m10.298s 00:09:57.293 user 0m10.858s 00:09:57.293 sys 0m1.389s 00:09:57.293 10:23:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:57.293 ************************************ 00:09:57.293 END TEST non_locking_app_on_locked_coremask 00:09:57.293 ************************************ 00:09:57.293 10:23:50 -- common/autotest_common.sh@10 -- # set +x 00:09:57.293 10:23:50 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:09:57.293 10:23:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:57.293 10:23:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:57.293 10:23:50 -- common/autotest_common.sh@10 -- # set +x 00:09:57.293 ************************************ 00:09:57.293 START TEST locking_app_on_unlocked_coremask 00:09:57.293 ************************************ 00:09:57.293 10:23:50 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:09:57.293 
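The scenario that just passed, condensed: the first target claims core 0 and holds its cpu lock, while the second asks for the same core but opts out of the lock check with --disable-cpumask-locks, so both run side by side. A sketch under those assumptions (binary path and sockets as in the log; waitforlisten/locks_exist/killprocess as sketched earlier):

SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

$SPDK_TGT -m 0x1 &                                          # holds the core-0 lock
pid=$!
waitforlisten $pid /var/tmp/spdk.sock
locks_exist $pid                                            # lock file is held

$SPDK_TGT -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
pid2=$!
waitforlisten $pid2 /var/tmp/spdk2.sock                     # starts despite the clash

killprocess $pid
killprocess $pid2
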
10:23:50 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=107759 00:09:57.293 10:23:50 -- event/cpu_locks.sh@99 -- # waitforlisten 107759 /var/tmp/spdk.sock 00:09:57.293 10:23:50 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:09:57.293 10:23:50 -- common/autotest_common.sh@819 -- # '[' -z 107759 ']' 00:09:57.293 10:23:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:57.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:57.293 10:23:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:57.293 10:23:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:57.293 10:23:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:57.293 10:23:50 -- common/autotest_common.sh@10 -- # set +x 00:09:57.293 [2024-07-12 10:23:50.966034] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:09:57.293 [2024-07-12 10:23:50.966218] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107759 ] 00:09:57.293 [2024-07-12 10:23:51.130526] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:09:57.293 [2024-07-12 10:23:51.130596] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.551 [2024-07-12 10:23:51.336422] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:57.551 [2024-07-12 10:23:51.336656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.923 10:23:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:58.923 10:23:52 -- common/autotest_common.sh@852 -- # return 0 00:09:58.923 10:23:52 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=107782 00:09:58.923 10:23:52 -- event/cpu_locks.sh@103 -- # waitforlisten 107782 /var/tmp/spdk2.sock 00:09:58.923 10:23:52 -- common/autotest_common.sh@819 -- # '[' -z 107782 ']' 00:09:58.924 10:23:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:58.924 10:23:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:58.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:58.924 10:23:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:58.924 10:23:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:58.924 10:23:52 -- common/autotest_common.sh@10 -- # set +x 00:09:58.924 10:23:52 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:58.924 [2024-07-12 10:23:52.664725] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:09:58.924 [2024-07-12 10:23:52.665075] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107782 ] 00:09:58.924 [2024-07-12 10:23:52.848914] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.488 [2024-07-12 10:23:53.210859] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:59.488 [2024-07-12 10:23:53.211112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.384 10:23:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:01.384 10:23:54 -- common/autotest_common.sh@852 -- # return 0 00:10:01.384 10:23:54 -- event/cpu_locks.sh@105 -- # locks_exist 107782 00:10:01.384 10:23:54 -- event/cpu_locks.sh@22 -- # lslocks -p 107782 00:10:01.384 10:23:54 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:01.642 10:23:55 -- event/cpu_locks.sh@107 -- # killprocess 107759 00:10:01.642 10:23:55 -- common/autotest_common.sh@926 -- # '[' -z 107759 ']' 00:10:01.642 10:23:55 -- common/autotest_common.sh@930 -- # kill -0 107759 00:10:01.642 10:23:55 -- common/autotest_common.sh@931 -- # uname 00:10:01.642 10:23:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:01.642 10:23:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 107759 00:10:01.642 10:23:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:01.642 killing process with pid 107759 00:10:01.642 10:23:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:01.642 10:23:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 107759' 00:10:01.642 10:23:55 -- common/autotest_common.sh@945 -- # kill 107759 00:10:01.642 10:23:55 -- common/autotest_common.sh@950 -- # wait 107759 00:10:05.846 10:23:59 -- event/cpu_locks.sh@108 -- # killprocess 107782 00:10:05.846 10:23:59 -- common/autotest_common.sh@926 -- # '[' -z 107782 ']' 00:10:05.846 10:23:59 -- common/autotest_common.sh@930 -- # kill -0 107782 00:10:05.846 10:23:59 -- common/autotest_common.sh@931 -- # uname 00:10:05.846 10:23:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:05.846 10:23:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 107782 00:10:05.846 10:23:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:05.846 killing process with pid 107782 00:10:05.846 10:23:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:05.846 10:23:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 107782' 00:10:05.846 10:23:59 -- common/autotest_common.sh@945 -- # kill 107782 00:10:05.846 10:23:59 -- common/autotest_common.sh@950 -- # wait 107782 00:10:07.219 00:10:07.219 real 0m10.041s 00:10:07.219 user 0m10.693s 00:10:07.219 sys 0m1.391s 00:10:07.219 10:24:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:07.219 ************************************ 00:10:07.219 END TEST locking_app_on_unlocked_coremask 00:10:07.219 ************************************ 00:10:07.219 10:24:00 -- common/autotest_common.sh@10 -- # set +x 00:10:07.219 10:24:00 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:10:07.219 10:24:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:07.219 10:24:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:07.219 10:24:00 -- 
common/autotest_common.sh@10 -- # set +x 00:10:07.219 ************************************ 00:10:07.219 START TEST locking_app_on_locked_coremask 00:10:07.219 ************************************ 00:10:07.219 10:24:00 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:10:07.219 10:24:00 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=107944 00:10:07.219 10:24:00 -- event/cpu_locks.sh@116 -- # waitforlisten 107944 /var/tmp/spdk.sock 00:10:07.219 10:24:00 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:07.219 10:24:00 -- common/autotest_common.sh@819 -- # '[' -z 107944 ']' 00:10:07.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:07.219 10:24:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:07.219 10:24:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:07.219 10:24:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:07.219 10:24:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:07.219 10:24:00 -- common/autotest_common.sh@10 -- # set +x 00:10:07.219 [2024-07-12 10:24:01.046436] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:07.219 [2024-07-12 10:24:01.046620] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107944 ] 00:10:07.476 [2024-07-12 10:24:01.199261] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.476 [2024-07-12 10:24:01.366071] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:07.476 [2024-07-12 10:24:01.366319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.849 10:24:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:08.849 10:24:02 -- common/autotest_common.sh@852 -- # return 0 00:10:08.849 10:24:02 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=107972 00:10:08.849 10:24:02 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 107972 /var/tmp/spdk2.sock 00:10:08.850 10:24:02 -- common/autotest_common.sh@640 -- # local es=0 00:10:08.850 10:24:02 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 107972 /var/tmp/spdk2.sock 00:10:08.850 10:24:02 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:10:08.850 10:24:02 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:10:08.850 10:24:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:08.850 10:24:02 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:10:08.850 10:24:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:08.850 10:24:02 -- common/autotest_common.sh@643 -- # waitforlisten 107972 /var/tmp/spdk2.sock 00:10:08.850 10:24:02 -- common/autotest_common.sh@819 -- # '[' -z 107972 ']' 00:10:08.850 10:24:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:08.850 10:24:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:08.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
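The waitforlisten trace above (local rpc_addr, max_retries=100, the 'Waiting for process...' echo) is a readiness poll on the freshly launched spdk_tgt. A minimal sketch, assuming readiness is detected by the RPC socket appearing; the real helper additionally probes the RPC server before returning:

    waitforlisten() {
        local pid=$1
        local rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2> /dev/null || return 1   # target died while starting
            [ -S "$rpc_addr" ] && return 0            # socket is up: listening
            sleep 0.1
        done
        return 1                                      # never started listening
    }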
00:10:08.850 10:24:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:08.850 10:24:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:08.850 10:24:02 -- common/autotest_common.sh@10 -- # set +x 00:10:09.108 [2024-07-12 10:24:02.781983] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:09.108 [2024-07-12 10:24:02.782181] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107972 ] 00:10:09.108 [2024-07-12 10:24:02.943844] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 107944 has claimed it. 00:10:09.108 [2024-07-12 10:24:02.943956] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:10:09.672 ERROR: process (pid: 107972) is no longer running 00:10:09.672 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (107972) - No such process 00:10:09.672 10:24:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:09.672 10:24:03 -- common/autotest_common.sh@852 -- # return 1 00:10:09.672 10:24:03 -- common/autotest_common.sh@643 -- # es=1 00:10:09.672 10:24:03 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:09.672 10:24:03 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:10:09.672 10:24:03 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:09.672 10:24:03 -- event/cpu_locks.sh@122 -- # locks_exist 107944 00:10:09.672 10:24:03 -- event/cpu_locks.sh@22 -- # lslocks -p 107944 00:10:09.672 10:24:03 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:09.930 10:24:03 -- event/cpu_locks.sh@124 -- # killprocess 107944 00:10:09.930 10:24:03 -- common/autotest_common.sh@926 -- # '[' -z 107944 ']' 00:10:09.930 10:24:03 -- common/autotest_common.sh@930 -- # kill -0 107944 00:10:09.930 10:24:03 -- common/autotest_common.sh@931 -- # uname 00:10:09.930 10:24:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:09.930 10:24:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 107944 00:10:09.930 10:24:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:09.930 10:24:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:09.930 10:24:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 107944' 00:10:09.930 killing process with pid 107944 00:10:09.930 10:24:03 -- common/autotest_common.sh@945 -- # kill 107944 00:10:09.930 10:24:03 -- common/autotest_common.sh@950 -- # wait 107944 00:10:11.829 00:10:11.829 real 0m4.558s 00:10:11.829 user 0m5.019s 00:10:11.829 sys 0m0.718s 00:10:11.829 10:24:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:11.829 ************************************ 00:10:11.829 END TEST locking_app_on_locked_coremask 00:10:11.829 ************************************ 00:10:11.829 10:24:05 -- common/autotest_common.sh@10 -- # set +x 00:10:11.829 10:24:05 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:10:11.829 10:24:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:11.829 10:24:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:11.829 10:24:05 -- common/autotest_common.sh@10 -- # set +x 00:10:11.829 ************************************ 00:10:11.829 START TEST 
locking_overlapped_coremask 00:10:11.829 ************************************ 00:10:11.829 10:24:05 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:10:11.829 10:24:05 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=108053 00:10:11.829 10:24:05 -- event/cpu_locks.sh@133 -- # waitforlisten 108053 /var/tmp/spdk.sock 00:10:11.829 10:24:05 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:10:11.829 10:24:05 -- common/autotest_common.sh@819 -- # '[' -z 108053 ']' 00:10:11.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:11.829 10:24:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:11.829 10:24:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:11.829 10:24:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:11.829 10:24:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:11.829 10:24:05 -- common/autotest_common.sh@10 -- # set +x 00:10:11.829 [2024-07-12 10:24:05.661216] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:11.829 [2024-07-12 10:24:05.661396] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108053 ] 00:10:12.086 [2024-07-12 10:24:05.823070] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:12.086 [2024-07-12 10:24:06.002802] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:12.086 [2024-07-12 10:24:06.003226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:12.086 [2024-07-12 10:24:06.003413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.086 [2024-07-12 10:24:06.003408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:13.461 10:24:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:13.461 10:24:07 -- common/autotest_common.sh@852 -- # return 0 00:10:13.461 10:24:07 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=108090 00:10:13.461 10:24:07 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 108090 /var/tmp/spdk2.sock 00:10:13.461 10:24:07 -- common/autotest_common.sh@640 -- # local es=0 00:10:13.461 10:24:07 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 108090 /var/tmp/spdk2.sock 00:10:13.461 10:24:07 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:10:13.461 10:24:07 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:10:13.461 10:24:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:13.461 10:24:07 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:10:13.461 10:24:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:13.461 10:24:07 -- common/autotest_common.sh@643 -- # waitforlisten 108090 /var/tmp/spdk2.sock 00:10:13.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
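Two small assertions carry the pass/fail logic of these cpu_locks tests: locks_exist, traced earlier against pids 107568 and 107944, and check_remaining_locks, exercised in the overlapped-coremask runs below. Reconstructed from the xtrace (illustrative sketches, not the verbatim helpers):

    # A core lock is held iff the target pid has a lock on a spdk_cpu_lock file.
    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }

    # With -m 0x7, exactly the lock files for cores 0-2 must exist in /var/tmp.
    check_remaining_locks() {
        locks=(/var/tmp/spdk_cpu_lock_*)
        locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
        [[ ${locks[*]} == "${locks_expected[*]}" ]]
    }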
00:10:13.461 10:24:07 -- common/autotest_common.sh@819 -- # '[' -z 108090 ']' 00:10:13.461 10:24:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:13.461 10:24:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:13.461 10:24:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:13.461 10:24:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:13.461 10:24:07 -- common/autotest_common.sh@10 -- # set +x 00:10:13.720 [2024-07-12 10:24:07.395828] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:13.720 [2024-07-12 10:24:07.396010] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108090 ] 00:10:13.720 [2024-07-12 10:24:07.593989] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 108053 has claimed it. 00:10:13.720 [2024-07-12 10:24:07.594137] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:10:14.287 ERROR: process (pid: 108090) is no longer running 00:10:14.287 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (108090) - No such process 00:10:14.287 10:24:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:14.287 10:24:08 -- common/autotest_common.sh@852 -- # return 1 00:10:14.287 10:24:08 -- common/autotest_common.sh@643 -- # es=1 00:10:14.287 10:24:08 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:14.287 10:24:08 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:10:14.287 10:24:08 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:14.287 10:24:08 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:10:14.287 10:24:08 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:14.287 10:24:08 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:14.287 10:24:08 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:14.287 10:24:08 -- event/cpu_locks.sh@141 -- # killprocess 108053 00:10:14.287 10:24:08 -- common/autotest_common.sh@926 -- # '[' -z 108053 ']' 00:10:14.287 10:24:08 -- common/autotest_common.sh@930 -- # kill -0 108053 00:10:14.287 10:24:08 -- common/autotest_common.sh@931 -- # uname 00:10:14.287 10:24:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:14.287 10:24:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 108053 00:10:14.287 10:24:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:14.287 10:24:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:14.287 10:24:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 108053' 00:10:14.287 killing process with pid 108053 00:10:14.287 10:24:08 -- common/autotest_common.sh@945 -- # kill 108053 00:10:14.287 10:24:08 -- common/autotest_common.sh@950 -- # wait 108053 00:10:16.819 00:10:16.819 real 0m4.628s 00:10:16.819 user 0m12.707s 00:10:16.819 sys 0m0.579s 00:10:16.819 10:24:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:16.819 
************************************ 00:10:16.819 END TEST locking_overlapped_coremask 00:10:16.819 ************************************ 00:10:16.819 10:24:10 -- common/autotest_common.sh@10 -- # set +x 00:10:16.819 10:24:10 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:10:16.819 10:24:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:16.819 10:24:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:16.819 10:24:10 -- common/autotest_common.sh@10 -- # set +x 00:10:16.819 ************************************ 00:10:16.819 START TEST locking_overlapped_coremask_via_rpc 00:10:16.819 ************************************ 00:10:16.819 10:24:10 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:10:16.819 10:24:10 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=108159 00:10:16.819 10:24:10 -- event/cpu_locks.sh@149 -- # waitforlisten 108159 /var/tmp/spdk.sock 00:10:16.819 10:24:10 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:10:16.819 10:24:10 -- common/autotest_common.sh@819 -- # '[' -z 108159 ']' 00:10:16.819 10:24:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.819 10:24:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:16.819 10:24:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:16.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:16.819 10:24:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:16.819 10:24:10 -- common/autotest_common.sh@10 -- # set +x 00:10:16.819 [2024-07-12 10:24:10.357348] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:16.819 [2024-07-12 10:24:10.357768] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108159 ] 00:10:16.819 [2024-07-12 10:24:10.533167] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:10:16.819 [2024-07-12 10:24:10.533239] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:16.819 [2024-07-12 10:24:10.732262] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:16.819 [2024-07-12 10:24:10.732915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:16.819 [2024-07-12 10:24:10.733221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:16.819 [2024-07-12 10:24:10.733245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.194 10:24:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:18.194 10:24:12 -- common/autotest_common.sh@852 -- # return 0 00:10:18.194 10:24:12 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=108191 00:10:18.194 10:24:12 -- event/cpu_locks.sh@153 -- # waitforlisten 108191 /var/tmp/spdk2.sock 00:10:18.194 10:24:12 -- common/autotest_common.sh@819 -- # '[' -z 108191 ']' 00:10:18.194 10:24:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:18.194 10:24:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:18.194 10:24:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:18.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:18.194 10:24:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:18.194 10:24:12 -- common/autotest_common.sh@10 -- # set +x 00:10:18.194 10:24:12 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:10:18.194 [2024-07-12 10:24:12.097178] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:18.194 [2024-07-12 10:24:12.097372] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108191 ] 00:10:18.452 [2024-07-12 10:24:12.276075] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
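Both targets in this via_rpc variant start with core locking switched off, which is why the second one comes up on overlapping cores without complaint; locking is only opted into afterwards over RPC. The launch shape, mirroring the traced command line:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
    # logs 'CPU core locks deactivated' and claims no /var/tmp/spdk_cpu_lock_* files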
00:10:18.452 [2024-07-12 10:24:12.276161] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:19.018 [2024-07-12 10:24:12.741738] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:19.018 [2024-07-12 10:24:12.742381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:19.018 [2024-07-12 10:24:12.755654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:10:19.018 [2024-07-12 10:24:12.755655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:20.922 10:24:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:20.922 10:24:14 -- common/autotest_common.sh@852 -- # return 0 00:10:20.922 10:24:14 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:10:20.922 10:24:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:20.922 10:24:14 -- common/autotest_common.sh@10 -- # set +x 00:10:20.922 10:24:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:20.922 10:24:14 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:20.922 10:24:14 -- common/autotest_common.sh@640 -- # local es=0 00:10:20.922 10:24:14 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:20.922 10:24:14 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:10:20.922 10:24:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:20.922 10:24:14 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:10:20.922 10:24:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:20.922 10:24:14 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:20.922 10:24:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:20.922 10:24:14 -- common/autotest_common.sh@10 -- # set +x 00:10:20.922 [2024-07-12 10:24:14.495565] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 108159 has claimed it. 00:10:20.922 request: 00:10:20.922 { 00:10:20.922 "method": "framework_enable_cpumask_locks", 00:10:20.922 "req_id": 1 00:10:20.922 } 00:10:20.922 Got JSON-RPC error response 00:10:20.922 response: 00:10:20.922 { 00:10:20.922 "code": -32603, 00:10:20.922 "message": "Failed to claim CPU core: 2" 00:10:20.922 } 00:10:20.922 10:24:14 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:10:20.922 10:24:14 -- common/autotest_common.sh@643 -- # es=1 00:10:20.922 10:24:14 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:20.922 10:24:14 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:10:20.922 10:24:14 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:20.922 10:24:14 -- event/cpu_locks.sh@158 -- # waitforlisten 108159 /var/tmp/spdk.sock 00:10:20.922 10:24:14 -- common/autotest_common.sh@819 -- # '[' -z 108159 ']' 00:10:20.922 10:24:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:20.922 10:24:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:20.922 10:24:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:20.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
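The request/response pair above is the heart of the test: once the first target has claimed its cores via RPC, the same opt-in from the second target must fail on the shared core (0x7 and 0x1c overlap on core 2). The traced rpc_cmd wrapper corresponds to a plain rpc.py call along these lines (script path assumed from the repo layout; illustrative):

    # Ask the second target, listening on /var/tmp/spdk2.sock, to claim its cores.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock \
        framework_enable_cpumask_locks
    # -> JSON-RPC error -32603: 'Failed to claim CPU core: 2' (held by pid 108159)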
00:10:20.922 10:24:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:20.922 10:24:14 -- common/autotest_common.sh@10 -- # set +x 00:10:20.922 10:24:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:20.922 10:24:14 -- common/autotest_common.sh@852 -- # return 0 00:10:20.922 10:24:14 -- event/cpu_locks.sh@159 -- # waitforlisten 108191 /var/tmp/spdk2.sock 00:10:20.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:20.922 10:24:14 -- common/autotest_common.sh@819 -- # '[' -z 108191 ']' 00:10:20.922 10:24:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:20.922 10:24:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:20.922 10:24:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:20.922 10:24:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:20.922 10:24:14 -- common/autotest_common.sh@10 -- # set +x 00:10:21.180 10:24:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:21.180 10:24:14 -- common/autotest_common.sh@852 -- # return 0 00:10:21.180 10:24:14 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:10:21.180 10:24:14 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:21.180 10:24:14 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:21.180 10:24:14 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:21.180 00:10:21.180 real 0m4.691s 00:10:21.180 user 0m1.861s 00:10:21.180 sys 0m0.270s 00:10:21.180 10:24:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:21.180 ************************************ 00:10:21.180 END TEST locking_overlapped_coremask_via_rpc 00:10:21.180 ************************************ 00:10:21.180 10:24:14 -- common/autotest_common.sh@10 -- # set +x 00:10:21.180 10:24:15 -- event/cpu_locks.sh@174 -- # cleanup 00:10:21.180 10:24:15 -- event/cpu_locks.sh@15 -- # [[ -z 108159 ]] 00:10:21.180 10:24:15 -- event/cpu_locks.sh@15 -- # killprocess 108159 00:10:21.180 10:24:15 -- common/autotest_common.sh@926 -- # '[' -z 108159 ']' 00:10:21.180 10:24:15 -- common/autotest_common.sh@930 -- # kill -0 108159 00:10:21.180 10:24:15 -- common/autotest_common.sh@931 -- # uname 00:10:21.180 10:24:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:21.180 10:24:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 108159 00:10:21.180 10:24:15 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:21.180 killing process with pid 108159 00:10:21.180 10:24:15 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:21.180 10:24:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 108159' 00:10:21.180 10:24:15 -- common/autotest_common.sh@945 -- # kill 108159 00:10:21.180 10:24:15 -- common/autotest_common.sh@950 -- # wait 108159 00:10:23.712 10:24:17 -- event/cpu_locks.sh@16 -- # [[ -z 108191 ]] 00:10:23.712 10:24:17 -- event/cpu_locks.sh@16 -- # killprocess 108191 00:10:23.712 10:24:17 -- common/autotest_common.sh@926 -- # '[' -z 108191 ']' 00:10:23.712 10:24:17 -- common/autotest_common.sh@930 -- # kill -0 108191 00:10:23.712 10:24:17 -- common/autotest_common.sh@931 -- # uname 00:10:23.712 
10:24:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:23.712 10:24:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 108191 00:10:23.712 10:24:17 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:10:23.712 killing process with pid 108191 00:10:23.712 10:24:17 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:10:23.712 10:24:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 108191' 00:10:23.712 10:24:17 -- common/autotest_common.sh@945 -- # kill 108191 00:10:23.712 10:24:17 -- common/autotest_common.sh@950 -- # wait 108191 00:10:25.615 10:24:19 -- event/cpu_locks.sh@18 -- # rm -f 00:10:25.615 10:24:19 -- event/cpu_locks.sh@1 -- # cleanup 00:10:25.615 10:24:19 -- event/cpu_locks.sh@15 -- # [[ -z 108159 ]] 00:10:25.615 10:24:19 -- event/cpu_locks.sh@15 -- # killprocess 108159 00:10:25.615 10:24:19 -- common/autotest_common.sh@926 -- # '[' -z 108159 ']' 00:10:25.615 10:24:19 -- common/autotest_common.sh@930 -- # kill -0 108159 00:10:25.615 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (108159) - No such process 00:10:25.615 10:24:19 -- common/autotest_common.sh@953 -- # echo 'Process with pid 108159 is not found' 00:10:25.615 Process with pid 108159 is not found 00:10:25.615 10:24:19 -- event/cpu_locks.sh@16 -- # [[ -z 108191 ]] 00:10:25.615 10:24:19 -- event/cpu_locks.sh@16 -- # killprocess 108191 00:10:25.615 10:24:19 -- common/autotest_common.sh@926 -- # '[' -z 108191 ']' 00:10:25.615 10:24:19 -- common/autotest_common.sh@930 -- # kill -0 108191 00:10:25.615 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (108191) - No such process 00:10:25.615 Process with pid 108191 is not found 00:10:25.615 10:24:19 -- common/autotest_common.sh@953 -- # echo 'Process with pid 108191 is not found' 00:10:25.615 10:24:19 -- event/cpu_locks.sh@18 -- # rm -f 00:10:25.615 00:10:25.615 real 0m47.268s 00:10:25.615 user 1m22.229s 00:10:25.615 sys 0m7.049s 00:10:25.615 10:24:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:25.615 10:24:19 -- common/autotest_common.sh@10 -- # set +x 00:10:25.615 ************************************ 00:10:25.615 END TEST cpu_locks 00:10:25.615 ************************************ 00:10:25.615 00:10:25.615 real 1m18.330s 00:10:25.615 user 2m21.614s 00:10:25.615 sys 0m10.574s 00:10:25.615 10:24:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:25.615 10:24:19 -- common/autotest_common.sh@10 -- # set +x 00:10:25.615 ************************************ 00:10:25.615 END TEST event 00:10:25.615 ************************************ 00:10:25.615 10:24:19 -- spdk/autotest.sh@188 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:10:25.615 10:24:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:25.615 10:24:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:25.615 10:24:19 -- common/autotest_common.sh@10 -- # set +x 00:10:25.615 ************************************ 00:10:25.615 START TEST thread 00:10:25.615 ************************************ 00:10:25.615 10:24:19 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:10:25.615 * Looking for test storage... 
00:10:25.615 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:10:25.615 10:24:19 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:25.615 10:24:19 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:10:25.615 10:24:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:25.615 10:24:19 -- common/autotest_common.sh@10 -- # set +x 00:10:25.615 ************************************ 00:10:25.615 START TEST thread_poller_perf 00:10:25.615 ************************************ 00:10:25.615 10:24:19 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:25.615 [2024-07-12 10:24:19.446403] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:25.615 [2024-07-12 10:24:19.446774] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108398 ] 00:10:25.873 [2024-07-12 10:24:19.617686] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.130 [2024-07-12 10:24:19.860815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.130 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:10:27.506 ====================================== 00:10:27.506 busy:2208992940 (cyc) 00:10:27.506 total_run_count: 356000 00:10:27.506 tsc_hz: 2200000000 (cyc) 00:10:27.506 ====================================== 00:10:27.506 poller_cost: 6205 (cyc), 2820 (nsec) 00:10:27.506 00:10:27.506 real 0m1.831s 00:10:27.506 user 0m1.602s 00:10:27.506 sys 0m0.128s 00:10:27.506 10:24:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:27.506 10:24:21 -- common/autotest_common.sh@10 -- # set +x 00:10:27.506 ************************************ 00:10:27.506 END TEST thread_poller_perf 00:10:27.506 ************************************ 00:10:27.506 10:24:21 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:27.506 10:24:21 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:10:27.506 10:24:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:27.506 10:24:21 -- common/autotest_common.sh@10 -- # set +x 00:10:27.506 ************************************ 00:10:27.506 START TEST thread_poller_perf 00:10:27.506 ************************************ 00:10:27.506 10:24:21 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:27.506 [2024-07-12 10:24:21.332459] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:27.506 [2024-07-12 10:24:21.332791] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108448 ] 00:10:27.762 [2024-07-12 10:24:21.499727] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:28.020 [2024-07-12 10:24:21.700280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.020 Running 1000 pollers for 1 seconds with 0 microseconds period. 
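(The second run's counters follow below.) The poller_cost line is plain arithmetic on the two counters above it: busy cycles divided by total_run_count, converted to nanoseconds via tsc_hz. Checked against the first run's numbers:

    busy=2208992940 runs=356000 tsc_hz=2200000000
    echo "$((busy / runs)) cyc"                           # 6205
    echo "$((busy / runs * 1000000000 / tsc_hz)) nsec"    # 2820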
00:10:29.393 ====================================== 00:10:29.393 busy:2204589592 (cyc) 00:10:29.393 total_run_count: 4480000 00:10:29.393 tsc_hz: 2200000000 (cyc) 00:10:29.393 ====================================== 00:10:29.393 poller_cost: 492 (cyc), 223 (nsec) 00:10:29.393 00:10:29.393 real 0m1.778s 00:10:29.393 user 0m1.573s 00:10:29.393 sys 0m0.104s 00:10:29.393 10:24:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:29.393 10:24:23 -- common/autotest_common.sh@10 -- # set +x 00:10:29.393 ************************************ 00:10:29.393 END TEST thread_poller_perf 00:10:29.393 ************************************ 00:10:29.393 10:24:23 -- thread/thread.sh@17 -- # [[ n != \y ]] 00:10:29.393 10:24:23 -- thread/thread.sh@18 -- # run_test thread_spdk_lock /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:10:29.393 10:24:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:29.393 10:24:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:29.393 10:24:23 -- common/autotest_common.sh@10 -- # set +x 00:10:29.393 ************************************ 00:10:29.393 START TEST thread_spdk_lock 00:10:29.393 ************************************ 00:10:29.393 10:24:23 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:10:29.393 [2024-07-12 10:24:23.167673] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:29.393 [2024-07-12 10:24:23.167869] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108486 ] 00:10:29.651 [2024-07-12 10:24:23.337912] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:29.651 [2024-07-12 10:24:23.523796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:29.651 [2024-07-12 10:24:23.523811] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.584 [2024-07-12 10:24:24.202043] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 955:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:10:30.584 [2024-07-12 10:24:24.202187] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3062:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:10:30.584 [2024-07-12 10:24:24.202232] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x55eb36932840 00:10:30.584 [2024-07-12 10:24:24.209157] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 850:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:10:30.585 [2024-07-12 10:24:24.209259] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:1016:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:10:30.585 [2024-07-12 10:24:24.209301] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 850:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:10:30.843 Starting test contend 00:10:30.843 Worker Delay Wait us Hold us Total us 00:10:30.843 0 3 118751 209160 327912 00:10:30.843 1 5 29976 321470 351446 00:10:30.843 PASS test contend 00:10:30.843 Starting test hold_by_poller 
00:10:30.843 PASS test hold_by_poller 00:10:30.843 Starting test hold_by_message 00:10:30.843 PASS test hold_by_message 00:10:30.843 /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock summary: 00:10:30.843 100014 assertions passed 00:10:30.843 0 assertions failed 00:10:30.843 00:10:30.843 real 0m1.451s 00:10:30.843 user 0m1.926s 00:10:30.843 sys 0m0.113s 00:10:30.843 10:24:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:30.843 ************************************ 00:10:30.843 END TEST thread_spdk_lock 00:10:30.843 ************************************ 00:10:30.843 10:24:24 -- common/autotest_common.sh@10 -- # set +x 00:10:30.843 00:10:30.843 real 0m5.299s 00:10:30.843 user 0m5.230s 00:10:30.843 sys 0m0.442s 00:10:30.843 10:24:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:30.843 10:24:24 -- common/autotest_common.sh@10 -- # set +x 00:10:30.843 ************************************ 00:10:30.843 END TEST thread 00:10:30.843 ************************************ 00:10:30.843 10:24:24 -- spdk/autotest.sh@189 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:10:30.844 10:24:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:30.844 10:24:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:30.844 10:24:24 -- common/autotest_common.sh@10 -- # set +x 00:10:30.844 ************************************ 00:10:30.844 START TEST accel 00:10:30.844 ************************************ 00:10:30.844 10:24:24 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:10:30.844 * Looking for test storage... 00:10:30.844 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:10:30.844 10:24:24 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:10:30.844 10:24:24 -- accel/accel.sh@74 -- # get_expected_opcs 00:10:30.844 10:24:24 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:10:30.844 10:24:24 -- accel/accel.sh@59 -- # spdk_tgt_pid=108593 00:10:30.844 10:24:24 -- accel/accel.sh@60 -- # waitforlisten 108593 00:10:30.844 10:24:24 -- common/autotest_common.sh@819 -- # '[' -z 108593 ']' 00:10:30.844 10:24:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:30.844 10:24:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:30.844 10:24:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:30.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:30.844 10:24:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:30.844 10:24:24 -- common/autotest_common.sh@10 -- # set +x 00:10:30.844 10:24:24 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:10:30.844 10:24:24 -- accel/accel.sh@58 -- # build_accel_config 00:10:30.844 10:24:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:30.844 10:24:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:30.844 10:24:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:30.844 10:24:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:30.844 10:24:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:30.844 10:24:24 -- accel/accel.sh@41 -- # local IFS=, 00:10:30.844 10:24:24 -- accel/accel.sh@42 -- # jq -r . 00:10:31.102 [2024-07-12 10:24:24.817066] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:10:31.102 [2024-07-12 10:24:24.817269] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108593 ] 00:10:31.102 [2024-07-12 10:24:24.979611] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:31.361 [2024-07-12 10:24:25.170422] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:31.361 [2024-07-12 10:24:25.170657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.736 10:24:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:32.736 10:24:26 -- common/autotest_common.sh@852 -- # return 0 00:10:32.736 10:24:26 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:10:32.736 10:24:26 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:10:32.736 10:24:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:32.736 10:24:26 -- common/autotest_common.sh@10 -- # set +x 00:10:32.736 10:24:26 -- accel/accel.sh@62 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:10:32.736 10:24:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:32.736 10:24:26 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:32.736 10:24:26 -- accel/accel.sh@64 -- # IFS== 00:10:32.736 10:24:26 -- accel/accel.sh@64 -- # read -r opc module 00:10:32.736 10:24:26 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:32.736 10:24:26 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:32.736 10:24:26 -- accel/accel.sh@64 -- # IFS== 00:10:32.736 10:24:26 -- accel/accel.sh@64 -- # read -r opc module 00:10:32.736 10:24:26 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:32.736 10:24:26 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:32.736 10:24:26 -- accel/accel.sh@64 -- # IFS== 00:10:32.736 10:24:26 -- accel/accel.sh@64 -- # read -r opc module 00:10:32.736 10:24:26 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:32.736 10:24:26 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:32.736 10:24:26 -- accel/accel.sh@64 -- # IFS== 00:10:32.736 10:24:26 -- accel/accel.sh@64 -- # read -r opc module 00:10:32.736 10:24:26 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:32.736 10:24:26 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:32.736 10:24:26 -- accel/accel.sh@64 -- # IFS== 00:10:32.736 10:24:26 -- accel/accel.sh@64 -- # read -r opc module 00:10:32.736 10:24:26 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:32.736 10:24:26 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:32.736 10:24:26 -- accel/accel.sh@64 -- # IFS== 00:10:32.736 10:24:26 -- accel/accel.sh@64 -- # read -r opc module 00:10:32.736 10:24:26 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:32.736 10:24:26 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:32.736 10:24:26 -- accel/accel.sh@64 -- # IFS== 00:10:32.736 10:24:26 -- accel/accel.sh@64 -- # read -r opc module 00:10:32.736 10:24:26 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:32.736 10:24:26 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:32.736 10:24:26 -- accel/accel.sh@64 -- # IFS== 00:10:32.736 10:24:26 -- accel/accel.sh@64 -- # read -r opc module 00:10:32.736 10:24:26 -- accel/accel.sh@65 -- # 
expected_opcs["$opc"]=software 00:10:32.736 10:24:26 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:32.736 10:24:26 -- accel/accel.sh@64 -- # IFS== 00:10:32.736 10:24:26 -- accel/accel.sh@64 -- # read -r opc module 00:10:32.736 10:24:26 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:32.736 10:24:26 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:32.736 10:24:26 -- accel/accel.sh@64 -- # IFS== 00:10:32.736 10:24:26 -- accel/accel.sh@64 -- # read -r opc module 00:10:32.736 10:24:26 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:32.736 10:24:26 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:32.736 10:24:26 -- accel/accel.sh@64 -- # IFS== 00:10:32.736 10:24:26 -- accel/accel.sh@64 -- # read -r opc module 00:10:32.736 10:24:26 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:32.736 10:24:26 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:32.736 10:24:26 -- accel/accel.sh@64 -- # IFS== 00:10:32.736 10:24:26 -- accel/accel.sh@64 -- # read -r opc module 00:10:32.736 10:24:26 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:32.736 10:24:26 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:32.736 10:24:26 -- accel/accel.sh@64 -- # IFS== 00:10:32.736 10:24:26 -- accel/accel.sh@64 -- # read -r opc module 00:10:32.736 10:24:26 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:32.736 10:24:26 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:32.736 10:24:26 -- accel/accel.sh@64 -- # IFS== 00:10:32.736 10:24:26 -- accel/accel.sh@64 -- # read -r opc module 00:10:32.736 10:24:26 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:32.736 10:24:26 -- accel/accel.sh@67 -- # killprocess 108593 00:10:32.736 10:24:26 -- common/autotest_common.sh@926 -- # '[' -z 108593 ']' 00:10:32.736 10:24:26 -- common/autotest_common.sh@930 -- # kill -0 108593 00:10:32.736 10:24:26 -- common/autotest_common.sh@931 -- # uname 00:10:32.736 10:24:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:32.736 10:24:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 108593 00:10:32.736 10:24:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:32.736 killing process with pid 108593 00:10:32.736 10:24:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:32.736 10:24:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 108593' 00:10:32.736 10:24:26 -- common/autotest_common.sh@945 -- # kill 108593 00:10:32.736 10:24:26 -- common/autotest_common.sh@950 -- # wait 108593 00:10:34.639 10:24:28 -- accel/accel.sh@68 -- # trap - ERR 00:10:34.639 10:24:28 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:10:34.639 10:24:28 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:10:34.639 10:24:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:34.639 10:24:28 -- common/autotest_common.sh@10 -- # set +x 00:10:34.639 10:24:28 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:10:34.639 10:24:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:10:34.639 10:24:28 -- accel/accel.sh@12 -- # build_accel_config 00:10:34.639 10:24:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:34.639 10:24:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:34.639 10:24:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:34.640 10:24:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:34.640 10:24:28 -- accel/accel.sh@37 -- # [[ -n 
'' ]] 00:10:34.640 10:24:28 -- accel/accel.sh@41 -- # local IFS=, 00:10:34.640 10:24:28 -- accel/accel.sh@42 -- # jq -r . 00:10:34.640 10:24:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:34.640 10:24:28 -- common/autotest_common.sh@10 -- # set +x 00:10:34.640 10:24:28 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:10:34.640 10:24:28 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:10:34.640 10:24:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:34.640 10:24:28 -- common/autotest_common.sh@10 -- # set +x 00:10:34.640 ************************************ 00:10:34.640 START TEST accel_missing_filename 00:10:34.640 ************************************ 00:10:34.640 10:24:28 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:10:34.640 10:24:28 -- common/autotest_common.sh@640 -- # local es=0 00:10:34.640 10:24:28 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:10:34.640 10:24:28 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:10:34.640 10:24:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:34.640 10:24:28 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:10:34.640 10:24:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:34.640 10:24:28 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:10:34.640 10:24:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:10:34.640 10:24:28 -- accel/accel.sh@12 -- # build_accel_config 00:10:34.640 10:24:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:34.640 10:24:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:34.640 10:24:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:34.640 10:24:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:34.640 10:24:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:34.640 10:24:28 -- accel/accel.sh@41 -- # local IFS=, 00:10:34.640 10:24:28 -- accel/accel.sh@42 -- # jq -r . 00:10:34.640 [2024-07-12 10:24:28.534181] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:34.640 [2024-07-12 10:24:28.534385] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108684 ] 00:10:34.898 [2024-07-12 10:24:28.705220] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:35.156 [2024-07-12 10:24:28.866685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.156 [2024-07-12 10:24:29.036219] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:35.720 [2024-07-12 10:24:29.450631] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:10:35.979 A filename is required. 
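This abort is the negative test doing its job: for the compress workload, accel_perf requires an uncompressed input file via -l, as its usage text later in this section spells out. The passing shape of the command, mirroring the paths in the xtrace (illustrative):

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib
    # note: adding -y on top of compress is itself rejected, as the next test shows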
00:10:35.979 10:24:29 -- common/autotest_common.sh@643 -- # es=234 00:10:35.979 10:24:29 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:35.979 10:24:29 -- common/autotest_common.sh@652 -- # es=106 00:10:35.979 10:24:29 -- common/autotest_common.sh@653 -- # case "$es" in 00:10:35.979 10:24:29 -- common/autotest_common.sh@660 -- # es=1 00:10:35.979 10:24:29 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:35.979 00:10:35.979 real 0m1.306s 00:10:35.979 user 0m1.071s 00:10:35.979 ************************************ 00:10:35.979 sys 0m0.192s 00:10:35.979 10:24:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:35.979 10:24:29 -- common/autotest_common.sh@10 -- # set +x 00:10:35.979 END TEST accel_missing_filename 00:10:35.979 ************************************ 00:10:35.979 10:24:29 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:35.979 10:24:29 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:10:35.979 10:24:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:35.979 10:24:29 -- common/autotest_common.sh@10 -- # set +x 00:10:35.979 ************************************ 00:10:35.979 START TEST accel_compress_verify 00:10:35.979 ************************************ 00:10:35.979 10:24:29 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:35.979 10:24:29 -- common/autotest_common.sh@640 -- # local es=0 00:10:35.979 10:24:29 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:35.979 10:24:29 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:10:35.979 10:24:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:35.979 10:24:29 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:10:35.979 10:24:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:35.979 10:24:29 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:35.979 10:24:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:35.979 10:24:29 -- accel/accel.sh@12 -- # build_accel_config 00:10:35.979 10:24:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:35.979 10:24:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:35.979 10:24:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:35.979 10:24:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:35.979 10:24:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:35.979 10:24:29 -- accel/accel.sh@41 -- # local IFS=, 00:10:35.979 10:24:29 -- accel/accel.sh@42 -- # jq -r . 00:10:35.979 [2024-07-12 10:24:29.886490] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:10:35.979 [2024-07-12 10:24:29.886704] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108728 ] 00:10:36.245 [2024-07-12 10:24:30.053133] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.512 [2024-07-12 10:24:30.216387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.512 [2024-07-12 10:24:30.396044] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:37.079 [2024-07-12 10:24:30.800459] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:10:37.338 00:10:37.338 Compression does not support the verify option, aborting. 00:10:37.338 10:24:31 -- common/autotest_common.sh@643 -- # es=161 00:10:37.338 10:24:31 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:37.338 10:24:31 -- common/autotest_common.sh@652 -- # es=33 00:10:37.338 10:24:31 -- common/autotest_common.sh@653 -- # case "$es" in 00:10:37.338 10:24:31 -- common/autotest_common.sh@660 -- # es=1 00:10:37.338 10:24:31 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:37.338 00:10:37.338 real 0m1.285s 00:10:37.338 user 0m1.051s 00:10:37.338 sys 0m0.178s 00:10:37.338 10:24:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:37.338 10:24:31 -- common/autotest_common.sh@10 -- # set +x 00:10:37.338 ************************************ 00:10:37.338 END TEST accel_compress_verify 00:10:37.338 ************************************ 00:10:37.338 10:24:31 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:10:37.338 10:24:31 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:10:37.338 10:24:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:37.338 10:24:31 -- common/autotest_common.sh@10 -- # set +x 00:10:37.338 ************************************ 00:10:37.338 START TEST accel_wrong_workload 00:10:37.338 ************************************ 00:10:37.338 10:24:31 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:10:37.338 10:24:31 -- common/autotest_common.sh@640 -- # local es=0 00:10:37.338 10:24:31 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:10:37.338 10:24:31 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:10:37.338 10:24:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:37.338 10:24:31 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:10:37.338 10:24:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:37.338 10:24:31 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:10:37.338 10:24:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:10:37.338 10:24:31 -- accel/accel.sh@12 -- # build_accel_config 00:10:37.338 10:24:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:37.338 10:24:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:37.338 10:24:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:37.338 10:24:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:37.338 10:24:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:37.338 10:24:31 -- accel/accel.sh@41 -- # local IFS=, 00:10:37.338 10:24:31 -- accel/accel.sh@42 -- # jq -r . 
00:10:37.338 Unsupported workload type: foobar 00:10:37.338 [2024-07-12 10:24:31.226911] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:10:37.338 accel_perf options: 00:10:37.338 [-h help message] 00:10:37.338 [-q queue depth per core] 00:10:37.338 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:10:37.338 [-T number of threads per core 00:10:37.338 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:10:37.338 [-t time in seconds] 00:10:37.338 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:10:37.338 [ dif_verify, , dif_generate, dif_generate_copy 00:10:37.338 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:10:37.338 [-l for compress/decompress workloads, name of uncompressed input file 00:10:37.338 [-S for crc32c workload, use this seed value (default 0) 00:10:37.338 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:10:37.338 [-f for fill workload, use this BYTE value (default 255) 00:10:37.338 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:10:37.338 [-y verify result if this switch is on] 00:10:37.338 [-a tasks to allocate per core (default: same value as -q)] 00:10:37.338 Can be used to spread operations across a wider range of memory. 00:10:37.338 10:24:31 -- common/autotest_common.sh@643 -- # es=1 00:10:37.338 10:24:31 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:37.338 10:24:31 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:10:37.338 10:24:31 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:37.338 00:10:37.338 real 0m0.070s 00:10:37.338 user 0m0.080s 00:10:37.338 sys 0m0.048s 00:10:37.338 ************************************ 00:10:37.338 END TEST accel_wrong_workload 00:10:37.338 ************************************ 00:10:37.338 10:24:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:37.338 10:24:31 -- common/autotest_common.sh@10 -- # set +x 00:10:37.597 10:24:31 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:10:37.597 10:24:31 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:10:37.597 10:24:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:37.597 10:24:31 -- common/autotest_common.sh@10 -- # set +x 00:10:37.597 ************************************ 00:10:37.597 START TEST accel_negative_buffers 00:10:37.597 ************************************ 00:10:37.597 10:24:31 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:10:37.597 10:24:31 -- common/autotest_common.sh@640 -- # local es=0 00:10:37.597 10:24:31 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:10:37.597 10:24:31 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:10:37.597 10:24:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:37.597 10:24:31 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:10:37.597 10:24:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:37.597 10:24:31 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:10:37.597 10:24:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:10:37.597 10:24:31 -- accel/accel.sh@12 -- # 
build_accel_config 00:10:37.597 10:24:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:37.597 10:24:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:37.597 10:24:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:37.597 10:24:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:37.597 10:24:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:37.597 10:24:31 -- accel/accel.sh@41 -- # local IFS=, 00:10:37.597 10:24:31 -- accel/accel.sh@42 -- # jq -r . 00:10:37.597 -x option must be non-negative. 00:10:37.597 [2024-07-12 10:24:31.335092] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:10:37.597 accel_perf options: 00:10:37.597 [-h help message] 00:10:37.597 [-q queue depth per core] 00:10:37.597 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:10:37.597 [-T number of threads per core 00:10:37.597 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:10:37.597 [-t time in seconds] 00:10:37.597 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:10:37.597 [ dif_verify, , dif_generate, dif_generate_copy 00:10:37.597 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:10:37.597 [-l for compress/decompress workloads, name of uncompressed input file 00:10:37.597 [-S for crc32c workload, use this seed value (default 0) 00:10:37.597 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:10:37.597 [-f for fill workload, use this BYTE value (default 255) 00:10:37.597 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:10:37.597 [-y verify result if this switch is on] 00:10:37.597 [-a tasks to allocate per core (default: same value as -q)] 00:10:37.597 Can be used to spread operations across a wider range of memory. 
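
Both NOT-wrapped runs above fail inside the option parser itself: '-w foobar' names an unknown workload and '-x -1' trips the non-negative check, so accel_perf prints its usage text and exits non-zero, which is exactly the outcome the NOT wrapper is asserting; the trace that follows folds that status back to es=1. A getopt-style sketch of the rejection path, not SPDK's actual parser (only the "-x option must be non-negative." message is taken from the log; everything else is an assumption):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    int xor_srcs = 2;   /* per the help text: default, minimum: 2 */
    int op;

    while ((op = getopt(argc, argv, "x:")) != -1) {
        switch (op) {
        case 'x':
            xor_srcs = atoi(optarg);    /* "-x -1" makes optarg == "-1" */
            if (xor_srcs < 0) {
                fprintf(stderr, "-x option must be non-negative.\n");
                return 1;   /* non-zero exit is what NOT() expects to see */
            }
            break;
        default:
            return 1;       /* unknown flag: print usage and fail, as above */
        }
    }
    printf("xor source buffers: %d\n", xor_srcs);
    return 0;
}
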
00:10:37.597 10:24:31 -- common/autotest_common.sh@643 -- # es=1 00:10:37.597 10:24:31 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:37.597 10:24:31 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:10:37.597 10:24:31 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:37.597 00:10:37.597 real 0m0.057s 00:10:37.597 user 0m0.033s 00:10:37.597 sys 0m0.024s 00:10:37.597 10:24:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:37.597 ************************************ 00:10:37.597 END TEST accel_negative_buffers 00:10:37.597 ************************************ 00:10:37.597 10:24:31 -- common/autotest_common.sh@10 -- # set +x 00:10:37.597 10:24:31 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:10:37.597 10:24:31 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:10:37.597 10:24:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:37.597 10:24:31 -- common/autotest_common.sh@10 -- # set +x 00:10:37.597 ************************************ 00:10:37.597 START TEST accel_crc32c 00:10:37.597 ************************************ 00:10:37.597 10:24:31 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:10:37.597 10:24:31 -- accel/accel.sh@16 -- # local accel_opc 00:10:37.597 10:24:31 -- accel/accel.sh@17 -- # local accel_module 00:10:37.597 10:24:31 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:10:37.597 10:24:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:10:37.597 10:24:31 -- accel/accel.sh@12 -- # build_accel_config 00:10:37.597 10:24:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:37.597 10:24:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:37.597 10:24:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:37.597 10:24:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:37.597 10:24:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:37.597 10:24:31 -- accel/accel.sh@41 -- # local IFS=, 00:10:37.597 10:24:31 -- accel/accel.sh@42 -- # jq -r . 00:10:37.597 [2024-07-12 10:24:31.451244] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:37.597 [2024-07-12 10:24:31.451452] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108820 ] 00:10:37.856 [2024-07-12 10:24:31.620287] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:38.115 [2024-07-12 10:24:31.797195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.017 10:24:33 -- accel/accel.sh@18 -- # out=' 00:10:40.017 SPDK Configuration: 00:10:40.017 Core mask: 0x1 00:10:40.017 00:10:40.017 Accel Perf Configuration: 00:10:40.017 Workload Type: crc32c 00:10:40.017 CRC-32C seed: 32 00:10:40.017 Transfer size: 4096 bytes 00:10:40.017 Vector count 1 00:10:40.017 Module: software 00:10:40.017 Queue depth: 32 00:10:40.017 Allocate depth: 32 00:10:40.017 # threads/core: 1 00:10:40.017 Run time: 1 seconds 00:10:40.017 Verify: Yes 00:10:40.017 00:10:40.017 Running for 1 seconds... 
00:10:40.017 00:10:40.017 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:40.017 ------------------------------------------------------------------------------------ 00:10:40.017 0,0 502272/s 1962 MiB/s 0 0 00:10:40.017 ==================================================================================== 00:10:40.017 Total 502272/s 1962 MiB/s 0 0' 00:10:40.017 10:24:33 -- accel/accel.sh@20 -- # IFS=: 00:10:40.017 10:24:33 -- accel/accel.sh@20 -- # read -r var val 00:10:40.017 10:24:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:10:40.017 10:24:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:10:40.017 10:24:33 -- accel/accel.sh@12 -- # build_accel_config 00:10:40.017 10:24:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:40.017 10:24:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:40.017 10:24:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:40.017 10:24:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:40.017 10:24:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:40.017 10:24:33 -- accel/accel.sh@41 -- # local IFS=, 00:10:40.017 10:24:33 -- accel/accel.sh@42 -- # jq -r . 00:10:40.017 [2024-07-12 10:24:33.774279] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:40.017 [2024-07-12 10:24:33.774469] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108854 ] 00:10:40.017 [2024-07-12 10:24:33.940242] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:40.276 [2024-07-12 10:24:34.119065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.534 10:24:34 -- accel/accel.sh@21 -- # val= 00:10:40.534 10:24:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.534 10:24:34 -- accel/accel.sh@20 -- # IFS=: 00:10:40.534 10:24:34 -- accel/accel.sh@20 -- # read -r var val 00:10:40.534 10:24:34 -- accel/accel.sh@21 -- # val= 00:10:40.534 10:24:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.534 10:24:34 -- accel/accel.sh@20 -- # IFS=: 00:10:40.534 10:24:34 -- accel/accel.sh@20 -- # read -r var val 00:10:40.534 10:24:34 -- accel/accel.sh@21 -- # val=0x1 00:10:40.534 10:24:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.534 10:24:34 -- accel/accel.sh@20 -- # IFS=: 00:10:40.534 10:24:34 -- accel/accel.sh@20 -- # read -r var val 00:10:40.534 10:24:34 -- accel/accel.sh@21 -- # val= 00:10:40.535 10:24:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.535 10:24:34 -- accel/accel.sh@20 -- # IFS=: 00:10:40.535 10:24:34 -- accel/accel.sh@20 -- # read -r var val 00:10:40.535 10:24:34 -- accel/accel.sh@21 -- # val= 00:10:40.535 10:24:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.535 10:24:34 -- accel/accel.sh@20 -- # IFS=: 00:10:40.535 10:24:34 -- accel/accel.sh@20 -- # read -r var val 00:10:40.535 10:24:34 -- accel/accel.sh@21 -- # val=crc32c 00:10:40.535 10:24:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.535 10:24:34 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:10:40.535 10:24:34 -- accel/accel.sh@20 -- # IFS=: 00:10:40.535 10:24:34 -- accel/accel.sh@20 -- # read -r var val 00:10:40.535 10:24:34 -- accel/accel.sh@21 -- # val=32 00:10:40.535 10:24:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.535 10:24:34 -- accel/accel.sh@20 -- # IFS=: 00:10:40.535 10:24:34 -- accel/accel.sh@20 -- # read -r var val 00:10:40.535 10:24:34 
-- accel/accel.sh@21 -- # val='4096 bytes' 00:10:40.535 10:24:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.535 10:24:34 -- accel/accel.sh@20 -- # IFS=: 00:10:40.535 10:24:34 -- accel/accel.sh@20 -- # read -r var val 00:10:40.535 10:24:34 -- accel/accel.sh@21 -- # val= 00:10:40.535 10:24:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.535 10:24:34 -- accel/accel.sh@20 -- # IFS=: 00:10:40.535 10:24:34 -- accel/accel.sh@20 -- # read -r var val 00:10:40.535 10:24:34 -- accel/accel.sh@21 -- # val=software 00:10:40.535 10:24:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.535 10:24:34 -- accel/accel.sh@23 -- # accel_module=software 00:10:40.535 10:24:34 -- accel/accel.sh@20 -- # IFS=: 00:10:40.535 10:24:34 -- accel/accel.sh@20 -- # read -r var val 00:10:40.535 10:24:34 -- accel/accel.sh@21 -- # val=32 00:10:40.535 10:24:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.535 10:24:34 -- accel/accel.sh@20 -- # IFS=: 00:10:40.535 10:24:34 -- accel/accel.sh@20 -- # read -r var val 00:10:40.535 10:24:34 -- accel/accel.sh@21 -- # val=32 00:10:40.535 10:24:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.535 10:24:34 -- accel/accel.sh@20 -- # IFS=: 00:10:40.535 10:24:34 -- accel/accel.sh@20 -- # read -r var val 00:10:40.535 10:24:34 -- accel/accel.sh@21 -- # val=1 00:10:40.535 10:24:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.535 10:24:34 -- accel/accel.sh@20 -- # IFS=: 00:10:40.535 10:24:34 -- accel/accel.sh@20 -- # read -r var val 00:10:40.535 10:24:34 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:40.535 10:24:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.535 10:24:34 -- accel/accel.sh@20 -- # IFS=: 00:10:40.535 10:24:34 -- accel/accel.sh@20 -- # read -r var val 00:10:40.535 10:24:34 -- accel/accel.sh@21 -- # val=Yes 00:10:40.535 10:24:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.535 10:24:34 -- accel/accel.sh@20 -- # IFS=: 00:10:40.535 10:24:34 -- accel/accel.sh@20 -- # read -r var val 00:10:40.535 10:24:34 -- accel/accel.sh@21 -- # val= 00:10:40.535 10:24:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.535 10:24:34 -- accel/accel.sh@20 -- # IFS=: 00:10:40.535 10:24:34 -- accel/accel.sh@20 -- # read -r var val 00:10:40.535 10:24:34 -- accel/accel.sh@21 -- # val= 00:10:40.535 10:24:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.535 10:24:34 -- accel/accel.sh@20 -- # IFS=: 00:10:40.535 10:24:34 -- accel/accel.sh@20 -- # read -r var val 00:10:42.436 10:24:36 -- accel/accel.sh@21 -- # val= 00:10:42.436 10:24:36 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.436 10:24:36 -- accel/accel.sh@20 -- # IFS=: 00:10:42.436 10:24:36 -- accel/accel.sh@20 -- # read -r var val 00:10:42.436 10:24:36 -- accel/accel.sh@21 -- # val= 00:10:42.436 10:24:36 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.436 10:24:36 -- accel/accel.sh@20 -- # IFS=: 00:10:42.436 10:24:36 -- accel/accel.sh@20 -- # read -r var val 00:10:42.436 10:24:36 -- accel/accel.sh@21 -- # val= 00:10:42.436 10:24:36 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.436 10:24:36 -- accel/accel.sh@20 -- # IFS=: 00:10:42.436 10:24:36 -- accel/accel.sh@20 -- # read -r var val 00:10:42.436 10:24:36 -- accel/accel.sh@21 -- # val= 00:10:42.436 10:24:36 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.436 10:24:36 -- accel/accel.sh@20 -- # IFS=: 00:10:42.436 10:24:36 -- accel/accel.sh@20 -- # read -r var val 00:10:42.436 10:24:36 -- accel/accel.sh@21 -- # val= 00:10:42.436 10:24:36 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.436 10:24:36 -- accel/accel.sh@20 -- # IFS=: 00:10:42.436 10:24:36 
-- accel/accel.sh@20 -- # read -r var val 00:10:42.436 10:24:36 -- accel/accel.sh@21 -- # val= 00:10:42.436 10:24:36 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.436 10:24:36 -- accel/accel.sh@20 -- # IFS=: 00:10:42.436 10:24:36 -- accel/accel.sh@20 -- # read -r var val 00:10:42.436 10:24:36 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:42.436 10:24:36 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:10:42.436 10:24:36 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:42.436 00:10:42.436 real 0m4.706s 00:10:42.436 user 0m4.221s 00:10:42.436 sys 0m0.345s 00:10:42.436 ************************************ 00:10:42.436 END TEST accel_crc32c 00:10:42.436 ************************************ 00:10:42.436 10:24:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:42.436 10:24:36 -- common/autotest_common.sh@10 -- # set +x 00:10:42.436 10:24:36 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:10:42.436 10:24:36 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:10:42.436 10:24:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:42.436 10:24:36 -- common/autotest_common.sh@10 -- # set +x 00:10:42.436 ************************************ 00:10:42.436 START TEST accel_crc32c_C2 00:10:42.436 ************************************ 00:10:42.436 10:24:36 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:10:42.436 10:24:36 -- accel/accel.sh@16 -- # local accel_opc 00:10:42.436 10:24:36 -- accel/accel.sh@17 -- # local accel_module 00:10:42.436 10:24:36 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:10:42.436 10:24:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:10:42.436 10:24:36 -- accel/accel.sh@12 -- # build_accel_config 00:10:42.436 10:24:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:42.436 10:24:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:42.436 10:24:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:42.436 10:24:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:42.436 10:24:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:42.436 10:24:36 -- accel/accel.sh@41 -- # local IFS=, 00:10:42.436 10:24:36 -- accel/accel.sh@42 -- # jq -r . 00:10:42.436 [2024-07-12 10:24:36.210297] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:42.436 [2024-07-12 10:24:36.210530] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108917 ] 00:10:42.695 [2024-07-12 10:24:36.376892] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.695 [2024-07-12 10:24:36.569368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.598 10:24:38 -- accel/accel.sh@18 -- # out=' 00:10:44.598 SPDK Configuration: 00:10:44.598 Core mask: 0x1 00:10:44.598 00:10:44.598 Accel Perf Configuration: 00:10:44.598 Workload Type: crc32c 00:10:44.598 CRC-32C seed: 0 00:10:44.598 Transfer size: 4096 bytes 00:10:44.598 Vector count 2 00:10:44.598 Module: software 00:10:44.598 Queue depth: 32 00:10:44.598 Allocate depth: 32 00:10:44.598 # threads/core: 1 00:10:44.598 Run time: 1 seconds 00:10:44.598 Verify: Yes 00:10:44.598 00:10:44.598 Running for 1 seconds... 
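
Before the -C 2 numbers arrive below, it is worth pinning down the operation itself: the single-vector accel_crc32c pass above reported 502272 transfers/s, i.e. 502272 x 4096 B / 2^20 = 1962 MiB/s, computing CRC-32C, the Castagnoli CRC used by iSCSI and NVMe. A bit-at-a-time reference implementation with the -S 32 seed from that run, assuming the standard reflected form (the log does not show how SPDK's software module seeds and finalizes, so treat this as the textbook variant, not SPDK's code):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Reference (slow) reflected CRC-32C, polynomial 0x82F63B78. */
static uint32_t crc32c(uint32_t crc, const uint8_t *buf, size_t len)
{
    crc = ~crc;
    while (len--) {
        crc ^= *buf++;
        for (int k = 0; k < 8; k++)
            crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
    }
    return ~crc;
}

int main(void)
{
    static uint8_t block[4096];          /* "Transfer size: 4096 bytes" */
    memset(block, 0, sizeof(block));
    printf("crc=0x%08x\n", crc32c(32, block, sizeof(block))); /* "-S 32" */
    return 0;
}
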
00:10:44.598 00:10:44.598 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:44.598 ------------------------------------------------------------------------------------ 00:10:44.598 0,0 382496/s 2988 MiB/s 0 0 00:10:44.598 ==================================================================================== 00:10:44.598 Total 382496/s 1494 MiB/s 0 0' 00:10:44.598 10:24:38 -- accel/accel.sh@20 -- # IFS=: 00:10:44.598 10:24:38 -- accel/accel.sh@20 -- # read -r var val 00:10:44.598 10:24:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:10:44.598 10:24:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:10:44.598 10:24:38 -- accel/accel.sh@12 -- # build_accel_config 00:10:44.598 10:24:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:44.598 10:24:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:44.598 10:24:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:44.598 10:24:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:44.598 10:24:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:44.598 10:24:38 -- accel/accel.sh@41 -- # local IFS=, 00:10:44.598 10:24:38 -- accel/accel.sh@42 -- # jq -r . 00:10:44.857 [2024-07-12 10:24:38.533757] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:44.857 [2024-07-12 10:24:38.533977] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108963 ] 00:10:44.857 [2024-07-12 10:24:38.702727] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:45.116 [2024-07-12 10:24:38.879591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.375 10:24:39 -- accel/accel.sh@21 -- # val= 00:10:45.375 10:24:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.375 10:24:39 -- accel/accel.sh@20 -- # IFS=: 00:10:45.375 10:24:39 -- accel/accel.sh@20 -- # read -r var val 00:10:45.375 10:24:39 -- accel/accel.sh@21 -- # val= 00:10:45.375 10:24:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.375 10:24:39 -- accel/accel.sh@20 -- # IFS=: 00:10:45.375 10:24:39 -- accel/accel.sh@20 -- # read -r var val 00:10:45.375 10:24:39 -- accel/accel.sh@21 -- # val=0x1 00:10:45.375 10:24:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.375 10:24:39 -- accel/accel.sh@20 -- # IFS=: 00:10:45.375 10:24:39 -- accel/accel.sh@20 -- # read -r var val 00:10:45.375 10:24:39 -- accel/accel.sh@21 -- # val= 00:10:45.375 10:24:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.375 10:24:39 -- accel/accel.sh@20 -- # IFS=: 00:10:45.375 10:24:39 -- accel/accel.sh@20 -- # read -r var val 00:10:45.375 10:24:39 -- accel/accel.sh@21 -- # val= 00:10:45.375 10:24:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.375 10:24:39 -- accel/accel.sh@20 -- # IFS=: 00:10:45.375 10:24:39 -- accel/accel.sh@20 -- # read -r var val 00:10:45.375 10:24:39 -- accel/accel.sh@21 -- # val=crc32c 00:10:45.375 10:24:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.375 10:24:39 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:10:45.375 10:24:39 -- accel/accel.sh@20 -- # IFS=: 00:10:45.375 10:24:39 -- accel/accel.sh@20 -- # read -r var val 00:10:45.375 10:24:39 -- accel/accel.sh@21 -- # val=0 00:10:45.375 10:24:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.375 10:24:39 -- accel/accel.sh@20 -- # IFS=: 00:10:45.375 10:24:39 -- accel/accel.sh@20 -- # read -r var val 00:10:45.375 10:24:39 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:10:45.375 10:24:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.375 10:24:39 -- accel/accel.sh@20 -- # IFS=: 00:10:45.375 10:24:39 -- accel/accel.sh@20 -- # read -r var val 00:10:45.375 10:24:39 -- accel/accel.sh@21 -- # val= 00:10:45.375 10:24:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.375 10:24:39 -- accel/accel.sh@20 -- # IFS=: 00:10:45.375 10:24:39 -- accel/accel.sh@20 -- # read -r var val 00:10:45.375 10:24:39 -- accel/accel.sh@21 -- # val=software 00:10:45.375 10:24:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.375 10:24:39 -- accel/accel.sh@23 -- # accel_module=software 00:10:45.375 10:24:39 -- accel/accel.sh@20 -- # IFS=: 00:10:45.375 10:24:39 -- accel/accel.sh@20 -- # read -r var val 00:10:45.375 10:24:39 -- accel/accel.sh@21 -- # val=32 00:10:45.375 10:24:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.375 10:24:39 -- accel/accel.sh@20 -- # IFS=: 00:10:45.375 10:24:39 -- accel/accel.sh@20 -- # read -r var val 00:10:45.375 10:24:39 -- accel/accel.sh@21 -- # val=32 00:10:45.375 10:24:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.375 10:24:39 -- accel/accel.sh@20 -- # IFS=: 00:10:45.375 10:24:39 -- accel/accel.sh@20 -- # read -r var val 00:10:45.375 10:24:39 -- accel/accel.sh@21 -- # val=1 00:10:45.375 10:24:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.375 10:24:39 -- accel/accel.sh@20 -- # IFS=: 00:10:45.375 10:24:39 -- accel/accel.sh@20 -- # read -r var val 00:10:45.375 10:24:39 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:45.375 10:24:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.375 10:24:39 -- accel/accel.sh@20 -- # IFS=: 00:10:45.375 10:24:39 -- accel/accel.sh@20 -- # read -r var val 00:10:45.375 10:24:39 -- accel/accel.sh@21 -- # val=Yes 00:10:45.375 10:24:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.375 10:24:39 -- accel/accel.sh@20 -- # IFS=: 00:10:45.375 10:24:39 -- accel/accel.sh@20 -- # read -r var val 00:10:45.375 10:24:39 -- accel/accel.sh@21 -- # val= 00:10:45.375 10:24:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.375 10:24:39 -- accel/accel.sh@20 -- # IFS=: 00:10:45.375 10:24:39 -- accel/accel.sh@20 -- # read -r var val 00:10:45.375 10:24:39 -- accel/accel.sh@21 -- # val= 00:10:45.375 10:24:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.375 10:24:39 -- accel/accel.sh@20 -- # IFS=: 00:10:45.375 10:24:39 -- accel/accel.sh@20 -- # read -r var val 00:10:47.273 10:24:40 -- accel/accel.sh@21 -- # val= 00:10:47.273 10:24:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.273 10:24:40 -- accel/accel.sh@20 -- # IFS=: 00:10:47.273 10:24:40 -- accel/accel.sh@20 -- # read -r var val 00:10:47.273 10:24:40 -- accel/accel.sh@21 -- # val= 00:10:47.273 10:24:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.273 10:24:40 -- accel/accel.sh@20 -- # IFS=: 00:10:47.273 10:24:40 -- accel/accel.sh@20 -- # read -r var val 00:10:47.273 10:24:40 -- accel/accel.sh@21 -- # val= 00:10:47.273 10:24:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.273 10:24:40 -- accel/accel.sh@20 -- # IFS=: 00:10:47.273 10:24:40 -- accel/accel.sh@20 -- # read -r var val 00:10:47.273 10:24:40 -- accel/accel.sh@21 -- # val= 00:10:47.273 10:24:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.273 10:24:40 -- accel/accel.sh@20 -- # IFS=: 00:10:47.273 10:24:40 -- accel/accel.sh@20 -- # read -r var val 00:10:47.273 10:24:40 -- accel/accel.sh@21 -- # val= 00:10:47.273 10:24:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.273 10:24:40 -- accel/accel.sh@20 -- # IFS=: 00:10:47.273 10:24:40 -- 
accel/accel.sh@20 -- # read -r var val 00:10:47.273 10:24:40 -- accel/accel.sh@21 -- # val= 00:10:47.273 10:24:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.273 10:24:40 -- accel/accel.sh@20 -- # IFS=: 00:10:47.273 10:24:40 -- accel/accel.sh@20 -- # read -r var val 00:10:47.273 10:24:40 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:47.273 10:24:40 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:10:47.273 10:24:40 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:47.273 00:10:47.273 real 0m4.651s 00:10:47.273 user 0m4.151s 00:10:47.273 sys 0m0.351s 00:10:47.273 10:24:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:47.273 10:24:40 -- common/autotest_common.sh@10 -- # set +x 00:10:47.273 ************************************ 00:10:47.273 END TEST accel_crc32c_C2 00:10:47.273 ************************************ 00:10:47.273 10:24:40 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:10:47.273 10:24:40 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:10:47.273 10:24:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:47.273 10:24:40 -- common/autotest_common.sh@10 -- # set +x 00:10:47.273 ************************************ 00:10:47.273 START TEST accel_copy 00:10:47.273 ************************************ 00:10:47.273 10:24:40 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:10:47.273 10:24:40 -- accel/accel.sh@16 -- # local accel_opc 00:10:47.273 10:24:40 -- accel/accel.sh@17 -- # local accel_module 00:10:47.273 10:24:40 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:10:47.273 10:24:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:10:47.273 10:24:40 -- accel/accel.sh@12 -- # build_accel_config 00:10:47.273 10:24:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:47.273 10:24:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:47.273 10:24:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:47.273 10:24:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:47.273 10:24:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:47.273 10:24:40 -- accel/accel.sh@41 -- # local IFS=, 00:10:47.273 10:24:40 -- accel/accel.sh@42 -- # jq -r . 00:10:47.273 [2024-07-12 10:24:40.912573] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:47.274 [2024-07-12 10:24:40.912811] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109010 ] 00:10:47.274 [2024-07-12 10:24:41.079212] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.530 [2024-07-12 10:24:41.260912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.485 10:24:43 -- accel/accel.sh@18 -- # out=' 00:10:49.485 SPDK Configuration: 00:10:49.485 Core mask: 0x1 00:10:49.485 00:10:49.485 Accel Perf Configuration: 00:10:49.485 Workload Type: copy 00:10:49.485 Transfer size: 4096 bytes 00:10:49.485 Vector count 1 00:10:49.485 Module: software 00:10:49.485 Queue depth: 32 00:10:49.485 Allocate depth: 32 00:10:49.485 # threads/core: 1 00:10:49.485 Run time: 1 seconds 00:10:49.485 Verify: Yes 00:10:49.485 00:10:49.485 Running for 1 seconds... 
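
In the -C 2 variant that just finished, each transfer is an iovec of two 4096-byte buffers, which reconciles the two bandwidth figures in its table: the per-core row counts both vectors (382496/s x 8192 B / 2^20 = 2988 MiB/s) while the Total row appears to count only one (382496/s x 4096 B / 2^20 = 1494 MiB/s); the copy_crc32c -C 2 run further down shows the same factor of two. A self-contained sketch of chaining the CRC across the vectors, which for CRC-32C yields the same value as checksumming the flattened buffer:

#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/uio.h>

static uint32_t crc32c(uint32_t crc, const uint8_t *buf, size_t len)
{
    crc = ~crc;
    while (len--) {
        crc ^= *buf++;
        for (int k = 0; k < 8; k++)
            crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
    }
    return ~crc;
}

int main(void)
{
    static uint8_t a[4096], b[4096], flat[8192];
    memset(a, 0x11, sizeof(a));
    memset(b, 0x22, sizeof(b));
    memcpy(flat, a, sizeof(a));
    memcpy(flat + sizeof(a), b, sizeof(b));

    struct iovec iov[2] = { { a, sizeof(a) }, { b, sizeof(b) } };
    uint32_t crc = 0;                          /* "CRC-32C seed: 0" */
    for (int i = 0; i < 2; i++)                /* "Vector count 2" */
        crc = crc32c(crc, iov[i].iov_base, iov[i].iov_len);

    /* Both fields print the same checksum. */
    printf("chained=0x%08x flat=0x%08x\n", crc, crc32c(0, flat, sizeof(flat)));
    return 0;
}
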
00:10:49.485 00:10:49.485 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:49.485 ------------------------------------------------------------------------------------ 00:10:49.485 0,0 307744/s 1202 MiB/s 0 0 00:10:49.486 ==================================================================================== 00:10:49.486 Total 307744/s 1202 MiB/s 0 0' 00:10:49.486 10:24:43 -- accel/accel.sh@20 -- # IFS=: 00:10:49.486 10:24:43 -- accel/accel.sh@20 -- # read -r var val 00:10:49.486 10:24:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:10:49.486 10:24:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:10:49.486 10:24:43 -- accel/accel.sh@12 -- # build_accel_config 00:10:49.486 10:24:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:49.486 10:24:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:49.486 10:24:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:49.486 10:24:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:49.486 10:24:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:49.486 10:24:43 -- accel/accel.sh@41 -- # local IFS=, 00:10:49.486 10:24:43 -- accel/accel.sh@42 -- # jq -r . 00:10:49.486 [2024-07-12 10:24:43.242366] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:49.486 [2024-07-12 10:24:43.242736] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109044 ] 00:10:49.486 [2024-07-12 10:24:43.408316] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:49.743 [2024-07-12 10:24:43.592860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.002 10:24:43 -- accel/accel.sh@21 -- # val= 00:10:50.002 10:24:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.002 10:24:43 -- accel/accel.sh@20 -- # IFS=: 00:10:50.002 10:24:43 -- accel/accel.sh@20 -- # read -r var val 00:10:50.002 10:24:43 -- accel/accel.sh@21 -- # val= 00:10:50.002 10:24:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.002 10:24:43 -- accel/accel.sh@20 -- # IFS=: 00:10:50.002 10:24:43 -- accel/accel.sh@20 -- # read -r var val 00:10:50.002 10:24:43 -- accel/accel.sh@21 -- # val=0x1 00:10:50.002 10:24:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.002 10:24:43 -- accel/accel.sh@20 -- # IFS=: 00:10:50.002 10:24:43 -- accel/accel.sh@20 -- # read -r var val 00:10:50.003 10:24:43 -- accel/accel.sh@21 -- # val= 00:10:50.003 10:24:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.003 10:24:43 -- accel/accel.sh@20 -- # IFS=: 00:10:50.003 10:24:43 -- accel/accel.sh@20 -- # read -r var val 00:10:50.003 10:24:43 -- accel/accel.sh@21 -- # val= 00:10:50.003 10:24:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.003 10:24:43 -- accel/accel.sh@20 -- # IFS=: 00:10:50.003 10:24:43 -- accel/accel.sh@20 -- # read -r var val 00:10:50.003 10:24:43 -- accel/accel.sh@21 -- # val=copy 00:10:50.003 10:24:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.003 10:24:43 -- accel/accel.sh@24 -- # accel_opc=copy 00:10:50.003 10:24:43 -- accel/accel.sh@20 -- # IFS=: 00:10:50.003 10:24:43 -- accel/accel.sh@20 -- # read -r var val 00:10:50.003 10:24:43 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:50.003 10:24:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.003 10:24:43 -- accel/accel.sh@20 -- # IFS=: 00:10:50.003 10:24:43 -- accel/accel.sh@20 -- # read -r var val 00:10:50.003 10:24:43 -- 
accel/accel.sh@21 -- # val= 00:10:50.003 10:24:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.003 10:24:43 -- accel/accel.sh@20 -- # IFS=: 00:10:50.003 10:24:43 -- accel/accel.sh@20 -- # read -r var val 00:10:50.003 10:24:43 -- accel/accel.sh@21 -- # val=software 00:10:50.003 10:24:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.003 10:24:43 -- accel/accel.sh@23 -- # accel_module=software 00:10:50.003 10:24:43 -- accel/accel.sh@20 -- # IFS=: 00:10:50.003 10:24:43 -- accel/accel.sh@20 -- # read -r var val 00:10:50.003 10:24:43 -- accel/accel.sh@21 -- # val=32 00:10:50.003 10:24:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.003 10:24:43 -- accel/accel.sh@20 -- # IFS=: 00:10:50.003 10:24:43 -- accel/accel.sh@20 -- # read -r var val 00:10:50.003 10:24:43 -- accel/accel.sh@21 -- # val=32 00:10:50.003 10:24:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.003 10:24:43 -- accel/accel.sh@20 -- # IFS=: 00:10:50.003 10:24:43 -- accel/accel.sh@20 -- # read -r var val 00:10:50.003 10:24:43 -- accel/accel.sh@21 -- # val=1 00:10:50.003 10:24:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.003 10:24:43 -- accel/accel.sh@20 -- # IFS=: 00:10:50.003 10:24:43 -- accel/accel.sh@20 -- # read -r var val 00:10:50.003 10:24:43 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:50.003 10:24:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.003 10:24:43 -- accel/accel.sh@20 -- # IFS=: 00:10:50.003 10:24:43 -- accel/accel.sh@20 -- # read -r var val 00:10:50.003 10:24:43 -- accel/accel.sh@21 -- # val=Yes 00:10:50.003 10:24:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.003 10:24:43 -- accel/accel.sh@20 -- # IFS=: 00:10:50.003 10:24:43 -- accel/accel.sh@20 -- # read -r var val 00:10:50.003 10:24:43 -- accel/accel.sh@21 -- # val= 00:10:50.003 10:24:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.003 10:24:43 -- accel/accel.sh@20 -- # IFS=: 00:10:50.003 10:24:43 -- accel/accel.sh@20 -- # read -r var val 00:10:50.003 10:24:43 -- accel/accel.sh@21 -- # val= 00:10:50.003 10:24:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.003 10:24:43 -- accel/accel.sh@20 -- # IFS=: 00:10:50.003 10:24:43 -- accel/accel.sh@20 -- # read -r var val 00:10:51.905 10:24:45 -- accel/accel.sh@21 -- # val= 00:10:51.905 10:24:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.905 10:24:45 -- accel/accel.sh@20 -- # IFS=: 00:10:51.905 10:24:45 -- accel/accel.sh@20 -- # read -r var val 00:10:51.905 10:24:45 -- accel/accel.sh@21 -- # val= 00:10:51.905 10:24:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.905 10:24:45 -- accel/accel.sh@20 -- # IFS=: 00:10:51.905 10:24:45 -- accel/accel.sh@20 -- # read -r var val 00:10:51.905 10:24:45 -- accel/accel.sh@21 -- # val= 00:10:51.905 10:24:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.905 10:24:45 -- accel/accel.sh@20 -- # IFS=: 00:10:51.905 10:24:45 -- accel/accel.sh@20 -- # read -r var val 00:10:51.905 10:24:45 -- accel/accel.sh@21 -- # val= 00:10:51.905 10:24:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.905 10:24:45 -- accel/accel.sh@20 -- # IFS=: 00:10:51.905 10:24:45 -- accel/accel.sh@20 -- # read -r var val 00:10:51.905 10:24:45 -- accel/accel.sh@21 -- # val= 00:10:51.905 10:24:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.905 10:24:45 -- accel/accel.sh@20 -- # IFS=: 00:10:51.905 10:24:45 -- accel/accel.sh@20 -- # read -r var val 00:10:51.905 10:24:45 -- accel/accel.sh@21 -- # val= 00:10:51.905 10:24:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.905 10:24:45 -- accel/accel.sh@20 -- # IFS=: 00:10:51.905 10:24:45 -- 
accel/accel.sh@20 -- # read -r var val 00:10:51.905 10:24:45 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:51.905 10:24:45 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:10:51.906 10:24:45 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:51.906 00:10:51.906 real 0m4.658s 00:10:51.906 user 0m4.170s 00:10:51.906 sys 0m0.330s 00:10:51.906 10:24:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:51.906 10:24:45 -- common/autotest_common.sh@10 -- # set +x 00:10:51.906 ************************************ 00:10:51.906 END TEST accel_copy 00:10:51.906 ************************************ 00:10:51.906 10:24:45 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:51.906 10:24:45 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:10:51.906 10:24:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:51.906 10:24:45 -- common/autotest_common.sh@10 -- # set +x 00:10:51.906 ************************************ 00:10:51.906 START TEST accel_fill 00:10:51.906 ************************************ 00:10:51.906 10:24:45 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:51.906 10:24:45 -- accel/accel.sh@16 -- # local accel_opc 00:10:51.906 10:24:45 -- accel/accel.sh@17 -- # local accel_module 00:10:51.906 10:24:45 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:51.906 10:24:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:51.906 10:24:45 -- accel/accel.sh@12 -- # build_accel_config 00:10:51.906 10:24:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:51.906 10:24:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:51.906 10:24:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:51.906 10:24:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:51.906 10:24:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:51.906 10:24:45 -- accel/accel.sh@41 -- # local IFS=, 00:10:51.906 10:24:45 -- accel/accel.sh@42 -- # jq -r . 00:10:51.906 [2024-07-12 10:24:45.625367] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:51.906 [2024-07-12 10:24:45.625585] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109113 ] 00:10:51.906 [2024-07-12 10:24:45.791728] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.164 [2024-07-12 10:24:45.986741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.107 10:24:47 -- accel/accel.sh@18 -- # out=' 00:10:54.107 SPDK Configuration: 00:10:54.107 Core mask: 0x1 00:10:54.107 00:10:54.107 Accel Perf Configuration: 00:10:54.107 Workload Type: fill 00:10:54.107 Fill pattern: 0x80 00:10:54.107 Transfer size: 4096 bytes 00:10:54.107 Vector count 1 00:10:54.107 Module: software 00:10:54.107 Queue depth: 64 00:10:54.107 Allocate depth: 64 00:10:54.107 # threads/core: 1 00:10:54.107 Run time: 1 seconds 00:10:54.107 Verify: Yes 00:10:54.107 00:10:54.107 Running for 1 seconds... 
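
The copy pass that just ended (307744/s, i.e. 307744 x 4096 B / 2^20 = 1202 MiB/s) is, in a software module, essentially memcpy under a timer. A self-contained sketch of that measurement shape, using the queue depth, run time, and -y verify from the configuration above (the loop structure and names are assumptions, not accel_perf's code):

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

static double now_sec(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void)
{
    const size_t xfer = 4096, qd = 32;   /* "Transfer size: 4096", "Queue depth: 32" */
    uint8_t *src = malloc(xfer), *dst = malloc(xfer);
    uint64_t transfers = 0;
    double t0, elapsed;

    if (!src || !dst)
        return 1;
    memset(src, 0xA5, xfer);

    t0 = now_sec();
    do {
        for (size_t i = 0; i < qd; i++) {   /* one queue's worth of copies */
            memcpy(dst, src, xfer);
            transfers++;
        }
        elapsed = now_sec() - t0;
    } while (elapsed < 1.0);                /* "Run time: 1 seconds" */

    if (memcmp(src, dst, xfer) != 0)        /* "-y": verify the result */
        return 1;
    printf("%.0f/s %.0f MiB/s\n", transfers / elapsed,
           transfers * xfer / elapsed / (1 << 20));
    free(src);
    free(dst);
    return 0;
}
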
00:10:54.107 00:10:54.107 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:54.107 ------------------------------------------------------------------------------------ 00:10:54.107 0,0 458944/s 1792 MiB/s 0 0 00:10:54.107 ==================================================================================== 00:10:54.107 Total 458944/s 1792 MiB/s 0 0' 00:10:54.107 10:24:47 -- accel/accel.sh@20 -- # IFS=: 00:10:54.107 10:24:47 -- accel/accel.sh@20 -- # read -r var val 00:10:54.107 10:24:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:54.107 10:24:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:54.107 10:24:47 -- accel/accel.sh@12 -- # build_accel_config 00:10:54.107 10:24:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:54.107 10:24:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:54.107 10:24:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:54.107 10:24:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:54.107 10:24:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:54.107 10:24:47 -- accel/accel.sh@41 -- # local IFS=, 00:10:54.107 10:24:47 -- accel/accel.sh@42 -- # jq -r . 00:10:54.107 [2024-07-12 10:24:47.950622] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:54.107 [2024-07-12 10:24:47.950816] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109147 ] 00:10:54.366 [2024-07-12 10:24:48.104764] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.625 [2024-07-12 10:24:48.314809] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.625 10:24:48 -- accel/accel.sh@21 -- # val= 00:10:54.625 10:24:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.625 10:24:48 -- accel/accel.sh@20 -- # IFS=: 00:10:54.625 10:24:48 -- accel/accel.sh@20 -- # read -r var val 00:10:54.625 10:24:48 -- accel/accel.sh@21 -- # val= 00:10:54.625 10:24:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.625 10:24:48 -- accel/accel.sh@20 -- # IFS=: 00:10:54.625 10:24:48 -- accel/accel.sh@20 -- # read -r var val 00:10:54.625 10:24:48 -- accel/accel.sh@21 -- # val=0x1 00:10:54.625 10:24:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.625 10:24:48 -- accel/accel.sh@20 -- # IFS=: 00:10:54.625 10:24:48 -- accel/accel.sh@20 -- # read -r var val 00:10:54.625 10:24:48 -- accel/accel.sh@21 -- # val= 00:10:54.625 10:24:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.625 10:24:48 -- accel/accel.sh@20 -- # IFS=: 00:10:54.625 10:24:48 -- accel/accel.sh@20 -- # read -r var val 00:10:54.625 10:24:48 -- accel/accel.sh@21 -- # val= 00:10:54.625 10:24:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.625 10:24:48 -- accel/accel.sh@20 -- # IFS=: 00:10:54.625 10:24:48 -- accel/accel.sh@20 -- # read -r var val 00:10:54.625 10:24:48 -- accel/accel.sh@21 -- # val=fill 00:10:54.625 10:24:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.625 10:24:48 -- accel/accel.sh@24 -- # accel_opc=fill 00:10:54.625 10:24:48 -- accel/accel.sh@20 -- # IFS=: 00:10:54.625 10:24:48 -- accel/accel.sh@20 -- # read -r var val 00:10:54.625 10:24:48 -- accel/accel.sh@21 -- # val=0x80 00:10:54.625 10:24:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.625 10:24:48 -- accel/accel.sh@20 -- # IFS=: 00:10:54.625 10:24:48 -- accel/accel.sh@20 -- # read -r var val 
00:10:54.625 10:24:48 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:54.625 10:24:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.625 10:24:48 -- accel/accel.sh@20 -- # IFS=: 00:10:54.625 10:24:48 -- accel/accel.sh@20 -- # read -r var val 00:10:54.625 10:24:48 -- accel/accel.sh@21 -- # val= 00:10:54.625 10:24:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.625 10:24:48 -- accel/accel.sh@20 -- # IFS=: 00:10:54.625 10:24:48 -- accel/accel.sh@20 -- # read -r var val 00:10:54.625 10:24:48 -- accel/accel.sh@21 -- # val=software 00:10:54.625 10:24:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.625 10:24:48 -- accel/accel.sh@23 -- # accel_module=software 00:10:54.625 10:24:48 -- accel/accel.sh@20 -- # IFS=: 00:10:54.625 10:24:48 -- accel/accel.sh@20 -- # read -r var val 00:10:54.625 10:24:48 -- accel/accel.sh@21 -- # val=64 00:10:54.625 10:24:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.625 10:24:48 -- accel/accel.sh@20 -- # IFS=: 00:10:54.625 10:24:48 -- accel/accel.sh@20 -- # read -r var val 00:10:54.625 10:24:48 -- accel/accel.sh@21 -- # val=64 00:10:54.625 10:24:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.625 10:24:48 -- accel/accel.sh@20 -- # IFS=: 00:10:54.625 10:24:48 -- accel/accel.sh@20 -- # read -r var val 00:10:54.625 10:24:48 -- accel/accel.sh@21 -- # val=1 00:10:54.625 10:24:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.625 10:24:48 -- accel/accel.sh@20 -- # IFS=: 00:10:54.625 10:24:48 -- accel/accel.sh@20 -- # read -r var val 00:10:54.625 10:24:48 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:54.625 10:24:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.625 10:24:48 -- accel/accel.sh@20 -- # IFS=: 00:10:54.625 10:24:48 -- accel/accel.sh@20 -- # read -r var val 00:10:54.625 10:24:48 -- accel/accel.sh@21 -- # val=Yes 00:10:54.625 10:24:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.625 10:24:48 -- accel/accel.sh@20 -- # IFS=: 00:10:54.625 10:24:48 -- accel/accel.sh@20 -- # read -r var val 00:10:54.625 10:24:48 -- accel/accel.sh@21 -- # val= 00:10:54.625 10:24:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.625 10:24:48 -- accel/accel.sh@20 -- # IFS=: 00:10:54.625 10:24:48 -- accel/accel.sh@20 -- # read -r var val 00:10:54.625 10:24:48 -- accel/accel.sh@21 -- # val= 00:10:54.625 10:24:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.625 10:24:48 -- accel/accel.sh@20 -- # IFS=: 00:10:54.625 10:24:48 -- accel/accel.sh@20 -- # read -r var val 00:10:56.527 10:24:50 -- accel/accel.sh@21 -- # val= 00:10:56.527 10:24:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.527 10:24:50 -- accel/accel.sh@20 -- # IFS=: 00:10:56.527 10:24:50 -- accel/accel.sh@20 -- # read -r var val 00:10:56.527 10:24:50 -- accel/accel.sh@21 -- # val= 00:10:56.527 10:24:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.527 10:24:50 -- accel/accel.sh@20 -- # IFS=: 00:10:56.527 10:24:50 -- accel/accel.sh@20 -- # read -r var val 00:10:56.527 10:24:50 -- accel/accel.sh@21 -- # val= 00:10:56.527 10:24:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.527 10:24:50 -- accel/accel.sh@20 -- # IFS=: 00:10:56.527 10:24:50 -- accel/accel.sh@20 -- # read -r var val 00:10:56.527 10:24:50 -- accel/accel.sh@21 -- # val= 00:10:56.527 10:24:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.527 10:24:50 -- accel/accel.sh@20 -- # IFS=: 00:10:56.527 10:24:50 -- accel/accel.sh@20 -- # read -r var val 00:10:56.527 10:24:50 -- accel/accel.sh@21 -- # val= 00:10:56.527 10:24:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.527 10:24:50 -- accel/accel.sh@20 -- # IFS=: 
00:10:56.527 10:24:50 -- accel/accel.sh@20 -- # read -r var val 00:10:56.527 10:24:50 -- accel/accel.sh@21 -- # val= 00:10:56.527 10:24:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.527 10:24:50 -- accel/accel.sh@20 -- # IFS=: 00:10:56.527 10:24:50 -- accel/accel.sh@20 -- # read -r var val 00:10:56.527 ************************************ 00:10:56.527 END TEST accel_fill 00:10:56.527 ************************************ 00:10:56.527 10:24:50 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:56.527 10:24:50 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:10:56.527 10:24:50 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:56.527 00:10:56.527 real 0m4.660s 00:10:56.527 user 0m4.170s 00:10:56.527 sys 0m0.337s 00:10:56.527 10:24:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:56.527 10:24:50 -- common/autotest_common.sh@10 -- # set +x 00:10:56.527 10:24:50 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:10:56.527 10:24:50 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:10:56.527 10:24:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:56.527 10:24:50 -- common/autotest_common.sh@10 -- # set +x 00:10:56.527 ************************************ 00:10:56.527 START TEST accel_copy_crc32c 00:10:56.527 ************************************ 00:10:56.527 10:24:50 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:10:56.527 10:24:50 -- accel/accel.sh@16 -- # local accel_opc 00:10:56.527 10:24:50 -- accel/accel.sh@17 -- # local accel_module 00:10:56.527 10:24:50 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:10:56.527 10:24:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:10:56.527 10:24:50 -- accel/accel.sh@12 -- # build_accel_config 00:10:56.528 10:24:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:56.528 10:24:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:56.528 10:24:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:56.528 10:24:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:56.528 10:24:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:56.528 10:24:50 -- accel/accel.sh@41 -- # local IFS=, 00:10:56.528 10:24:50 -- accel/accel.sh@42 -- # jq -r . 00:10:56.528 [2024-07-12 10:24:50.339086] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:56.528 [2024-07-12 10:24:50.339998] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109194 ] 00:10:56.785 [2024-07-12 10:24:50.507698] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:56.785 [2024-07-12 10:24:50.702299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.315 10:24:52 -- accel/accel.sh@18 -- # out=' 00:10:59.315 SPDK Configuration: 00:10:59.315 Core mask: 0x1 00:10:59.315 00:10:59.315 Accel Perf Configuration: 00:10:59.315 Workload Type: copy_crc32c 00:10:59.315 CRC-32C seed: 0 00:10:59.315 Vector size: 4096 bytes 00:10:59.315 Transfer size: 4096 bytes 00:10:59.315 Vector count 1 00:10:59.315 Module: software 00:10:59.315 Queue depth: 32 00:10:59.315 Allocate depth: 32 00:10:59.315 # threads/core: 1 00:10:59.315 Run time: 1 seconds 00:10:59.315 Verify: Yes 00:10:59.315 00:10:59.315 Running for 1 seconds... 
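
fill is the memset analogue of the copy loop sketched above: the run that just completed writes the 0x80 pattern into 4096-byte buffers at queue depth 64 and, with -y, reads them back (458944/s x 4096 B / 2^20 = 1792 MiB/s). The core of one operation, with the usual caveat that the real module is more elaborate than this:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    enum { XFER = 4096 };
    static uint8_t buf[XFER];

    memset(buf, 0x80, sizeof(buf));           /* "Fill pattern: 0x80" */
    for (size_t i = 0; i < sizeof(buf); i++)  /* "-y": count miscompares */
        if (buf[i] != 0x80) {
            fprintf(stderr, "miscompare at byte %zu\n", i);
            return 1;
        }
    puts("fill verified, 0 miscompares");
    return 0;
}
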
00:10:59.315 00:10:59.315 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:59.315 ------------------------------------------------------------------------------------ 00:10:59.315 0,0 246912/s 964 MiB/s 0 0 00:10:59.315 ==================================================================================== 00:10:59.315 Total 246912/s 964 MiB/s 0 0' 00:10:59.315 10:24:52 -- accel/accel.sh@20 -- # IFS=: 00:10:59.315 10:24:52 -- accel/accel.sh@20 -- # read -r var val 00:10:59.315 10:24:52 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:10:59.315 10:24:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:10:59.315 10:24:52 -- accel/accel.sh@12 -- # build_accel_config 00:10:59.315 10:24:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:59.315 10:24:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:59.315 10:24:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:59.315 10:24:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:59.315 10:24:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:59.315 10:24:52 -- accel/accel.sh@41 -- # local IFS=, 00:10:59.315 10:24:52 -- accel/accel.sh@42 -- # jq -r . 00:10:59.315 [2024-07-12 10:24:52.658118] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:59.315 [2024-07-12 10:24:52.658281] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109228 ] 00:10:59.315 [2024-07-12 10:24:52.811740] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:59.315 [2024-07-12 10:24:53.000500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.315 10:24:53 -- accel/accel.sh@21 -- # val= 00:10:59.315 10:24:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.316 10:24:53 -- accel/accel.sh@20 -- # IFS=: 00:10:59.316 10:24:53 -- accel/accel.sh@20 -- # read -r var val 00:10:59.316 10:24:53 -- accel/accel.sh@21 -- # val= 00:10:59.316 10:24:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.316 10:24:53 -- accel/accel.sh@20 -- # IFS=: 00:10:59.316 10:24:53 -- accel/accel.sh@20 -- # read -r var val 00:10:59.316 10:24:53 -- accel/accel.sh@21 -- # val=0x1 00:10:59.316 10:24:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.316 10:24:53 -- accel/accel.sh@20 -- # IFS=: 00:10:59.316 10:24:53 -- accel/accel.sh@20 -- # read -r var val 00:10:59.316 10:24:53 -- accel/accel.sh@21 -- # val= 00:10:59.316 10:24:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.316 10:24:53 -- accel/accel.sh@20 -- # IFS=: 00:10:59.316 10:24:53 -- accel/accel.sh@20 -- # read -r var val 00:10:59.316 10:24:53 -- accel/accel.sh@21 -- # val= 00:10:59.316 10:24:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.316 10:24:53 -- accel/accel.sh@20 -- # IFS=: 00:10:59.316 10:24:53 -- accel/accel.sh@20 -- # read -r var val 00:10:59.316 10:24:53 -- accel/accel.sh@21 -- # val=copy_crc32c 00:10:59.316 10:24:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.316 10:24:53 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:10:59.316 10:24:53 -- accel/accel.sh@20 -- # IFS=: 00:10:59.316 10:24:53 -- accel/accel.sh@20 -- # read -r var val 00:10:59.316 10:24:53 -- accel/accel.sh@21 -- # val=0 00:10:59.316 10:24:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.316 10:24:53 -- accel/accel.sh@20 -- # IFS=: 00:10:59.316 10:24:53 -- accel/accel.sh@20 -- # read -r var val 00:10:59.316 
10:24:53 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:59.316 10:24:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.316 10:24:53 -- accel/accel.sh@20 -- # IFS=: 00:10:59.316 10:24:53 -- accel/accel.sh@20 -- # read -r var val 00:10:59.316 10:24:53 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:59.316 10:24:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.316 10:24:53 -- accel/accel.sh@20 -- # IFS=: 00:10:59.316 10:24:53 -- accel/accel.sh@20 -- # read -r var val 00:10:59.316 10:24:53 -- accel/accel.sh@21 -- # val= 00:10:59.316 10:24:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.316 10:24:53 -- accel/accel.sh@20 -- # IFS=: 00:10:59.316 10:24:53 -- accel/accel.sh@20 -- # read -r var val 00:10:59.316 10:24:53 -- accel/accel.sh@21 -- # val=software 00:10:59.316 10:24:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.316 10:24:53 -- accel/accel.sh@23 -- # accel_module=software 00:10:59.316 10:24:53 -- accel/accel.sh@20 -- # IFS=: 00:10:59.316 10:24:53 -- accel/accel.sh@20 -- # read -r var val 00:10:59.316 10:24:53 -- accel/accel.sh@21 -- # val=32 00:10:59.316 10:24:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.316 10:24:53 -- accel/accel.sh@20 -- # IFS=: 00:10:59.316 10:24:53 -- accel/accel.sh@20 -- # read -r var val 00:10:59.316 10:24:53 -- accel/accel.sh@21 -- # val=32 00:10:59.316 10:24:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.316 10:24:53 -- accel/accel.sh@20 -- # IFS=: 00:10:59.316 10:24:53 -- accel/accel.sh@20 -- # read -r var val 00:10:59.316 10:24:53 -- accel/accel.sh@21 -- # val=1 00:10:59.316 10:24:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.316 10:24:53 -- accel/accel.sh@20 -- # IFS=: 00:10:59.316 10:24:53 -- accel/accel.sh@20 -- # read -r var val 00:10:59.316 10:24:53 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:59.316 10:24:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.316 10:24:53 -- accel/accel.sh@20 -- # IFS=: 00:10:59.316 10:24:53 -- accel/accel.sh@20 -- # read -r var val 00:10:59.316 10:24:53 -- accel/accel.sh@21 -- # val=Yes 00:10:59.316 10:24:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.316 10:24:53 -- accel/accel.sh@20 -- # IFS=: 00:10:59.316 10:24:53 -- accel/accel.sh@20 -- # read -r var val 00:10:59.316 10:24:53 -- accel/accel.sh@21 -- # val= 00:10:59.316 10:24:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.316 10:24:53 -- accel/accel.sh@20 -- # IFS=: 00:10:59.316 10:24:53 -- accel/accel.sh@20 -- # read -r var val 00:10:59.316 10:24:53 -- accel/accel.sh@21 -- # val= 00:10:59.316 10:24:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.316 10:24:53 -- accel/accel.sh@20 -- # IFS=: 00:10:59.316 10:24:53 -- accel/accel.sh@20 -- # read -r var val 00:11:01.219 10:24:54 -- accel/accel.sh@21 -- # val= 00:11:01.219 10:24:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.219 10:24:54 -- accel/accel.sh@20 -- # IFS=: 00:11:01.219 10:24:54 -- accel/accel.sh@20 -- # read -r var val 00:11:01.219 10:24:54 -- accel/accel.sh@21 -- # val= 00:11:01.219 10:24:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.219 10:24:54 -- accel/accel.sh@20 -- # IFS=: 00:11:01.219 10:24:54 -- accel/accel.sh@20 -- # read -r var val 00:11:01.219 10:24:54 -- accel/accel.sh@21 -- # val= 00:11:01.219 10:24:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.219 10:24:54 -- accel/accel.sh@20 -- # IFS=: 00:11:01.219 10:24:54 -- accel/accel.sh@20 -- # read -r var val 00:11:01.219 10:24:54 -- accel/accel.sh@21 -- # val= 00:11:01.219 10:24:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.219 10:24:54 -- accel/accel.sh@20 -- # IFS=: 
00:11:01.219 10:24:54 -- accel/accel.sh@20 -- # read -r var val 00:11:01.219 10:24:54 -- accel/accel.sh@21 -- # val= 00:11:01.219 10:24:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.219 10:24:54 -- accel/accel.sh@20 -- # IFS=: 00:11:01.219 10:24:54 -- accel/accel.sh@20 -- # read -r var val 00:11:01.219 10:24:54 -- accel/accel.sh@21 -- # val= 00:11:01.219 10:24:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.219 10:24:54 -- accel/accel.sh@20 -- # IFS=: 00:11:01.219 10:24:54 -- accel/accel.sh@20 -- # read -r var val 00:11:01.219 ************************************ 00:11:01.219 END TEST accel_copy_crc32c 00:11:01.219 ************************************ 00:11:01.219 10:24:55 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:01.219 10:24:55 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:11:01.219 10:24:55 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:01.219 00:11:01.219 real 0m4.708s 00:11:01.219 user 0m4.245s 00:11:01.219 sys 0m0.321s 00:11:01.220 10:24:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:01.220 10:24:55 -- common/autotest_common.sh@10 -- # set +x 00:11:01.220 10:24:55 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:11:01.220 10:24:55 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:11:01.220 10:24:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:01.220 10:24:55 -- common/autotest_common.sh@10 -- # set +x 00:11:01.220 ************************************ 00:11:01.220 START TEST accel_copy_crc32c_C2 00:11:01.220 ************************************ 00:11:01.220 10:24:55 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:11:01.220 10:24:55 -- accel/accel.sh@16 -- # local accel_opc 00:11:01.220 10:24:55 -- accel/accel.sh@17 -- # local accel_module 00:11:01.220 10:24:55 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:11:01.220 10:24:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:11:01.220 10:24:55 -- accel/accel.sh@12 -- # build_accel_config 00:11:01.220 10:24:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:01.220 10:24:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:01.220 10:24:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:01.220 10:24:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:01.220 10:24:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:01.220 10:24:55 -- accel/accel.sh@41 -- # local IFS=, 00:11:01.220 10:24:55 -- accel/accel.sh@42 -- # jq -r . 00:11:01.220 [2024-07-12 10:24:55.092056] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
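The accel_copy_crc32c test that just completed drives the software copy_crc32c operation: each transfer copies a 4096-byte source buffer into a destination buffer while folding the same bytes into a CRC-32C checksum (the configuration blocks in this log report "CRC-32C seed: 0"). A minimal, hypothetical C sketch of that software path follows; the table-driven CRC-32C over the reflected Castagnoli polynomial 0x82F63B78 is standard, but the helper names and signatures are illustrative assumptions, not SPDK's actual code.

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    static uint32_t crc32c_table[256];

    /* Build the byte-wise lookup table once (reflected Castagnoli polynomial). */
    static void crc32c_init(void)
    {
        for (uint32_t i = 0; i < 256; i++) {
            uint32_t c = i;
            for (int k = 0; k < 8; k++)
                c = (c & 1) ? 0x82F63B78U ^ (c >> 1) : c >> 1;
            crc32c_table[i] = c;
        }
    }

    /* Copy len bytes from src to dst and return the CRC-32C of the data.
     * Call crc32c_init() once before first use; seed 0 matches the
     * "CRC-32C seed: 0" configuration line. */
    uint32_t copy_crc32c(void *dst, const void *src, size_t len, uint32_t seed)
    {
        const uint8_t *p = src;
        uint32_t crc = ~seed;

        memcpy(dst, src, len);                    /* the "copy" half */
        for (size_t i = 0; i < len; i++)          /* the "crc32c" half */
            crc = crc32c_table[(crc ^ p[i]) & 0xFF] ^ (crc >> 8);
        return ~crc;
    }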
00:11:01.220 [2024-07-12 10:24:55.092654] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109298 ] 00:11:01.478 [2024-07-12 10:24:55.259047] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:01.737 [2024-07-12 10:24:55.433915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.639 10:24:57 -- accel/accel.sh@18 -- # out=' 00:11:03.639 SPDK Configuration: 00:11:03.639 Core mask: 0x1 00:11:03.639 00:11:03.639 Accel Perf Configuration: 00:11:03.639 Workload Type: copy_crc32c 00:11:03.639 CRC-32C seed: 0 00:11:03.639 Vector size: 4096 bytes 00:11:03.639 Transfer size: 8192 bytes 00:11:03.639 Vector count 2 00:11:03.639 Module: software 00:11:03.639 Queue depth: 32 00:11:03.639 Allocate depth: 32 00:11:03.639 # threads/core: 1 00:11:03.639 Run time: 1 seconds 00:11:03.639 Verify: Yes 00:11:03.639 00:11:03.639 Running for 1 seconds... 00:11:03.639 00:11:03.639 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:03.639 ------------------------------------------------------------------------------------ 00:11:03.639 0,0 180096/s 1407 MiB/s 0 0 00:11:03.639 ==================================================================================== 00:11:03.639 Total 180096/s 703 MiB/s 0 0' 00:11:03.639 10:24:57 -- accel/accel.sh@20 -- # IFS=: 00:11:03.639 10:24:57 -- accel/accel.sh@20 -- # read -r var val 00:11:03.639 10:24:57 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:11:03.639 10:24:57 -- accel/accel.sh@12 -- # build_accel_config 00:11:03.639 10:24:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:11:03.639 10:24:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:03.639 10:24:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:03.639 10:24:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:03.639 10:24:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:03.639 10:24:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:03.639 10:24:57 -- accel/accel.sh@41 -- # local IFS=, 00:11:03.639 10:24:57 -- accel/accel.sh@42 -- # jq -r . 00:11:03.639 [2024-07-12 10:24:57.415274] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
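The -C 2 run above raises the transfer size to 8192 bytes by splitting each transfer across two 4096-byte vectors, with the CRC chained so a single checksum covers the whole transfer. Below is a hypothetical sketch of that chaining on top of the copy_crc32c() helper from the previous sketch; the iovec-based signature is an assumption made for illustration, not accel_perf's interface.

    #include <stddef.h>
    #include <stdint.h>
    #include <sys/uio.h>    /* struct iovec */

    uint32_t copy_crc32c(void *dst, const void *src, size_t len, uint32_t seed);

    /* Chain the CRC across iovcnt vector pairs: the CRC returned for
     * vector N seeds vector N+1, so the result covers all 8192 bytes. */
    static uint32_t copy_crc32c_iovs(struct iovec *dst, const struct iovec *src,
                                     int iovcnt, uint32_t seed)
    {
        uint32_t crc = seed;

        for (int i = 0; i < iovcnt; i++)
            crc = copy_crc32c(dst[i].iov_base, src[i].iov_base,
                              src[i].iov_len, crc);
        return crc;
    }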
00:11:03.639 [2024-07-12 10:24:57.415494] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109338 ] 00:11:03.898 [2024-07-12 10:24:57.583142] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:03.898 [2024-07-12 10:24:57.753018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.157 10:24:57 -- accel/accel.sh@21 -- # val= 00:11:04.157 10:24:57 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.157 10:24:57 -- accel/accel.sh@20 -- # IFS=: 00:11:04.157 10:24:57 -- accel/accel.sh@20 -- # read -r var val 00:11:04.157 10:24:57 -- accel/accel.sh@21 -- # val= 00:11:04.157 10:24:57 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.157 10:24:57 -- accel/accel.sh@20 -- # IFS=: 00:11:04.157 10:24:57 -- accel/accel.sh@20 -- # read -r var val 00:11:04.157 10:24:57 -- accel/accel.sh@21 -- # val=0x1 00:11:04.157 10:24:57 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.157 10:24:57 -- accel/accel.sh@20 -- # IFS=: 00:11:04.157 10:24:57 -- accel/accel.sh@20 -- # read -r var val 00:11:04.157 10:24:57 -- accel/accel.sh@21 -- # val= 00:11:04.157 10:24:57 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.157 10:24:57 -- accel/accel.sh@20 -- # IFS=: 00:11:04.157 10:24:57 -- accel/accel.sh@20 -- # read -r var val 00:11:04.157 10:24:57 -- accel/accel.sh@21 -- # val= 00:11:04.157 10:24:57 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.157 10:24:57 -- accel/accel.sh@20 -- # IFS=: 00:11:04.157 10:24:57 -- accel/accel.sh@20 -- # read -r var val 00:11:04.157 10:24:57 -- accel/accel.sh@21 -- # val=copy_crc32c 00:11:04.157 10:24:57 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.157 10:24:57 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:11:04.157 10:24:57 -- accel/accel.sh@20 -- # IFS=: 00:11:04.157 10:24:57 -- accel/accel.sh@20 -- # read -r var val 00:11:04.157 10:24:57 -- accel/accel.sh@21 -- # val=0 00:11:04.157 10:24:57 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.157 10:24:57 -- accel/accel.sh@20 -- # IFS=: 00:11:04.157 10:24:57 -- accel/accel.sh@20 -- # read -r var val 00:11:04.157 10:24:57 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:04.157 10:24:57 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.157 10:24:57 -- accel/accel.sh@20 -- # IFS=: 00:11:04.157 10:24:57 -- accel/accel.sh@20 -- # read -r var val 00:11:04.157 10:24:57 -- accel/accel.sh@21 -- # val='8192 bytes' 00:11:04.157 10:24:57 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.157 10:24:57 -- accel/accel.sh@20 -- # IFS=: 00:11:04.157 10:24:57 -- accel/accel.sh@20 -- # read -r var val 00:11:04.157 10:24:57 -- accel/accel.sh@21 -- # val= 00:11:04.157 10:24:57 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.157 10:24:57 -- accel/accel.sh@20 -- # IFS=: 00:11:04.157 10:24:57 -- accel/accel.sh@20 -- # read -r var val 00:11:04.157 10:24:57 -- accel/accel.sh@21 -- # val=software 00:11:04.157 10:24:57 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.157 10:24:57 -- accel/accel.sh@23 -- # accel_module=software 00:11:04.157 10:24:57 -- accel/accel.sh@20 -- # IFS=: 00:11:04.157 10:24:57 -- accel/accel.sh@20 -- # read -r var val 00:11:04.157 10:24:57 -- accel/accel.sh@21 -- # val=32 00:11:04.157 10:24:57 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.157 10:24:57 -- accel/accel.sh@20 -- # IFS=: 00:11:04.157 10:24:57 -- accel/accel.sh@20 -- # read -r var val 00:11:04.157 10:24:57 -- accel/accel.sh@21 -- # val=32 
00:11:04.157 10:24:57 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.157 10:24:57 -- accel/accel.sh@20 -- # IFS=: 00:11:04.157 10:24:57 -- accel/accel.sh@20 -- # read -r var val 00:11:04.157 10:24:57 -- accel/accel.sh@21 -- # val=1 00:11:04.157 10:24:57 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.157 10:24:57 -- accel/accel.sh@20 -- # IFS=: 00:11:04.157 10:24:57 -- accel/accel.sh@20 -- # read -r var val 00:11:04.157 10:24:57 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:04.157 10:24:57 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.157 10:24:57 -- accel/accel.sh@20 -- # IFS=: 00:11:04.157 10:24:57 -- accel/accel.sh@20 -- # read -r var val 00:11:04.157 10:24:57 -- accel/accel.sh@21 -- # val=Yes 00:11:04.157 10:24:57 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.157 10:24:57 -- accel/accel.sh@20 -- # IFS=: 00:11:04.157 10:24:57 -- accel/accel.sh@20 -- # read -r var val 00:11:04.157 10:24:57 -- accel/accel.sh@21 -- # val= 00:11:04.157 10:24:57 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.157 10:24:57 -- accel/accel.sh@20 -- # IFS=: 00:11:04.157 10:24:57 -- accel/accel.sh@20 -- # read -r var val 00:11:04.157 10:24:57 -- accel/accel.sh@21 -- # val= 00:11:04.157 10:24:57 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.157 10:24:57 -- accel/accel.sh@20 -- # IFS=: 00:11:04.157 10:24:57 -- accel/accel.sh@20 -- # read -r var val 00:11:06.059 10:24:59 -- accel/accel.sh@21 -- # val= 00:11:06.059 10:24:59 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.059 10:24:59 -- accel/accel.sh@20 -- # IFS=: 00:11:06.059 10:24:59 -- accel/accel.sh@20 -- # read -r var val 00:11:06.059 10:24:59 -- accel/accel.sh@21 -- # val= 00:11:06.059 10:24:59 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.059 10:24:59 -- accel/accel.sh@20 -- # IFS=: 00:11:06.059 10:24:59 -- accel/accel.sh@20 -- # read -r var val 00:11:06.059 10:24:59 -- accel/accel.sh@21 -- # val= 00:11:06.059 10:24:59 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.059 10:24:59 -- accel/accel.sh@20 -- # IFS=: 00:11:06.059 10:24:59 -- accel/accel.sh@20 -- # read -r var val 00:11:06.059 10:24:59 -- accel/accel.sh@21 -- # val= 00:11:06.059 10:24:59 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.059 10:24:59 -- accel/accel.sh@20 -- # IFS=: 00:11:06.059 10:24:59 -- accel/accel.sh@20 -- # read -r var val 00:11:06.059 10:24:59 -- accel/accel.sh@21 -- # val= 00:11:06.059 10:24:59 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.059 10:24:59 -- accel/accel.sh@20 -- # IFS=: 00:11:06.059 10:24:59 -- accel/accel.sh@20 -- # read -r var val 00:11:06.059 10:24:59 -- accel/accel.sh@21 -- # val= 00:11:06.059 10:24:59 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.059 10:24:59 -- accel/accel.sh@20 -- # IFS=: 00:11:06.059 10:24:59 -- accel/accel.sh@20 -- # read -r var val 00:11:06.059 ************************************ 00:11:06.059 END TEST accel_copy_crc32c_C2 00:11:06.059 ************************************ 00:11:06.059 10:24:59 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:06.059 10:24:59 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:11:06.059 10:24:59 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:06.059 00:11:06.059 real 0m4.654s 00:11:06.059 user 0m4.145s 00:11:06.059 sys 0m0.364s 00:11:06.059 10:24:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:06.059 10:24:59 -- common/autotest_common.sh@10 -- # set +x 00:11:06.059 10:24:59 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:11:06.059 10:24:59 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 
00:11:06.059 10:24:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:06.059 10:24:59 -- common/autotest_common.sh@10 -- # set +x 00:11:06.059 ************************************ 00:11:06.059 START TEST accel_dualcast 00:11:06.059 ************************************ 00:11:06.059 10:24:59 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:11:06.059 10:24:59 -- accel/accel.sh@16 -- # local accel_opc 00:11:06.059 10:24:59 -- accel/accel.sh@17 -- # local accel_module 00:11:06.059 10:24:59 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:11:06.059 10:24:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:11:06.059 10:24:59 -- accel/accel.sh@12 -- # build_accel_config 00:11:06.059 10:24:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:06.059 10:24:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:06.059 10:24:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:06.059 10:24:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:06.059 10:24:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:06.059 10:24:59 -- accel/accel.sh@41 -- # local IFS=, 00:11:06.059 10:24:59 -- accel/accel.sh@42 -- # jq -r . 00:11:06.059 [2024-07-12 10:24:59.794022] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:06.059 [2024-07-12 10:24:59.794221] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109385 ] 00:11:06.059 [2024-07-12 10:24:59.959285] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.317 [2024-07-12 10:25:00.144329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.219 10:25:02 -- accel/accel.sh@18 -- # out=' 00:11:08.219 SPDK Configuration: 00:11:08.219 Core mask: 0x1 00:11:08.219 00:11:08.219 Accel Perf Configuration: 00:11:08.219 Workload Type: dualcast 00:11:08.219 Transfer size: 4096 bytes 00:11:08.219 Vector count 1 00:11:08.219 Module: software 00:11:08.219 Queue depth: 32 00:11:08.219 Allocate depth: 32 00:11:08.219 # threads/core: 1 00:11:08.219 Run time: 1 seconds 00:11:08.219 Verify: Yes 00:11:08.219 00:11:08.219 Running for 1 seconds... 00:11:08.219 00:11:08.219 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:08.219 ------------------------------------------------------------------------------------ 00:11:08.219 0,0 321088/s 1254 MiB/s 0 0 00:11:08.219 ==================================================================================== 00:11:08.219 Total 321088/s 1254 MiB/s 0 0' 00:11:08.219 10:25:02 -- accel/accel.sh@20 -- # IFS=: 00:11:08.219 10:25:02 -- accel/accel.sh@20 -- # read -r var val 00:11:08.219 10:25:02 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:11:08.219 10:25:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:11:08.219 10:25:02 -- accel/accel.sh@12 -- # build_accel_config 00:11:08.219 10:25:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:08.219 10:25:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:08.219 10:25:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:08.219 10:25:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:08.219 10:25:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:08.219 10:25:02 -- accel/accel.sh@41 -- # local IFS=, 00:11:08.219 10:25:02 -- accel/accel.sh@42 -- # jq -r . 
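The dualcast workload, per the configuration above, writes a single 4096-byte source to two destination buffers in one operation. The software path is simply two copies; hardware DMA engines can typically fan the write out from a single descriptor, which is where an offload module would gain over the software path. A hypothetical sketch (the function name is ours):

    #include <stddef.h>
    #include <string.h>

    /* Copy one source into two destinations, as the dualcast workload does. */
    static void dualcast(void *dst1, void *dst2, const void *src, size_t len)
    {
        memcpy(dst1, src, len);
        memcpy(dst2, src, len);
    }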
00:11:08.219 [2024-07-12 10:25:02.135406] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:08.219 [2024-07-12 10:25:02.135610] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109426 ] 00:11:08.482 [2024-07-12 10:25:02.303080] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:08.744 [2024-07-12 10:25:02.482487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.744 10:25:02 -- accel/accel.sh@21 -- # val= 00:11:08.744 10:25:02 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.744 10:25:02 -- accel/accel.sh@20 -- # IFS=: 00:11:09.003 10:25:02 -- accel/accel.sh@20 -- # read -r var val 00:11:09.003 10:25:02 -- accel/accel.sh@21 -- # val= 00:11:09.003 10:25:02 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.003 10:25:02 -- accel/accel.sh@20 -- # IFS=: 00:11:09.003 10:25:02 -- accel/accel.sh@20 -- # read -r var val 00:11:09.003 10:25:02 -- accel/accel.sh@21 -- # val=0x1 00:11:09.003 10:25:02 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.003 10:25:02 -- accel/accel.sh@20 -- # IFS=: 00:11:09.003 10:25:02 -- accel/accel.sh@20 -- # read -r var val 00:11:09.003 10:25:02 -- accel/accel.sh@21 -- # val= 00:11:09.003 10:25:02 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.003 10:25:02 -- accel/accel.sh@20 -- # IFS=: 00:11:09.003 10:25:02 -- accel/accel.sh@20 -- # read -r var val 00:11:09.003 10:25:02 -- accel/accel.sh@21 -- # val= 00:11:09.003 10:25:02 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.003 10:25:02 -- accel/accel.sh@20 -- # IFS=: 00:11:09.003 10:25:02 -- accel/accel.sh@20 -- # read -r var val 00:11:09.003 10:25:02 -- accel/accel.sh@21 -- # val=dualcast 00:11:09.003 10:25:02 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.003 10:25:02 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:11:09.003 10:25:02 -- accel/accel.sh@20 -- # IFS=: 00:11:09.003 10:25:02 -- accel/accel.sh@20 -- # read -r var val 00:11:09.003 10:25:02 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:09.003 10:25:02 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.003 10:25:02 -- accel/accel.sh@20 -- # IFS=: 00:11:09.003 10:25:02 -- accel/accel.sh@20 -- # read -r var val 00:11:09.003 10:25:02 -- accel/accel.sh@21 -- # val= 00:11:09.003 10:25:02 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.003 10:25:02 -- accel/accel.sh@20 -- # IFS=: 00:11:09.003 10:25:02 -- accel/accel.sh@20 -- # read -r var val 00:11:09.003 10:25:02 -- accel/accel.sh@21 -- # val=software 00:11:09.003 10:25:02 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.003 10:25:02 -- accel/accel.sh@23 -- # accel_module=software 00:11:09.003 10:25:02 -- accel/accel.sh@20 -- # IFS=: 00:11:09.003 10:25:02 -- accel/accel.sh@20 -- # read -r var val 00:11:09.003 10:25:02 -- accel/accel.sh@21 -- # val=32 00:11:09.003 10:25:02 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.003 10:25:02 -- accel/accel.sh@20 -- # IFS=: 00:11:09.003 10:25:02 -- accel/accel.sh@20 -- # read -r var val 00:11:09.003 10:25:02 -- accel/accel.sh@21 -- # val=32 00:11:09.003 10:25:02 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.003 10:25:02 -- accel/accel.sh@20 -- # IFS=: 00:11:09.003 10:25:02 -- accel/accel.sh@20 -- # read -r var val 00:11:09.003 10:25:02 -- accel/accel.sh@21 -- # val=1 00:11:09.003 10:25:02 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.003 10:25:02 -- accel/accel.sh@20 -- # IFS=: 00:11:09.003 
10:25:02 -- accel/accel.sh@20 -- # read -r var val 00:11:09.003 10:25:02 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:09.003 10:25:02 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.003 10:25:02 -- accel/accel.sh@20 -- # IFS=: 00:11:09.003 10:25:02 -- accel/accel.sh@20 -- # read -r var val 00:11:09.003 10:25:02 -- accel/accel.sh@21 -- # val=Yes 00:11:09.003 10:25:02 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.003 10:25:02 -- accel/accel.sh@20 -- # IFS=: 00:11:09.003 10:25:02 -- accel/accel.sh@20 -- # read -r var val 00:11:09.003 10:25:02 -- accel/accel.sh@21 -- # val= 00:11:09.003 10:25:02 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.003 10:25:02 -- accel/accel.sh@20 -- # IFS=: 00:11:09.003 10:25:02 -- accel/accel.sh@20 -- # read -r var val 00:11:09.003 10:25:02 -- accel/accel.sh@21 -- # val= 00:11:09.003 10:25:02 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.003 10:25:02 -- accel/accel.sh@20 -- # IFS=: 00:11:09.003 10:25:02 -- accel/accel.sh@20 -- # read -r var val 00:11:10.911 10:25:04 -- accel/accel.sh@21 -- # val= 00:11:10.911 10:25:04 -- accel/accel.sh@22 -- # case "$var" in 00:11:10.911 10:25:04 -- accel/accel.sh@20 -- # IFS=: 00:11:10.911 10:25:04 -- accel/accel.sh@20 -- # read -r var val 00:11:10.911 10:25:04 -- accel/accel.sh@21 -- # val= 00:11:10.911 10:25:04 -- accel/accel.sh@22 -- # case "$var" in 00:11:10.911 10:25:04 -- accel/accel.sh@20 -- # IFS=: 00:11:10.911 10:25:04 -- accel/accel.sh@20 -- # read -r var val 00:11:10.911 10:25:04 -- accel/accel.sh@21 -- # val= 00:11:10.911 10:25:04 -- accel/accel.sh@22 -- # case "$var" in 00:11:10.911 10:25:04 -- accel/accel.sh@20 -- # IFS=: 00:11:10.911 10:25:04 -- accel/accel.sh@20 -- # read -r var val 00:11:10.911 10:25:04 -- accel/accel.sh@21 -- # val= 00:11:10.911 10:25:04 -- accel/accel.sh@22 -- # case "$var" in 00:11:10.911 10:25:04 -- accel/accel.sh@20 -- # IFS=: 00:11:10.911 10:25:04 -- accel/accel.sh@20 -- # read -r var val 00:11:10.911 10:25:04 -- accel/accel.sh@21 -- # val= 00:11:10.911 10:25:04 -- accel/accel.sh@22 -- # case "$var" in 00:11:10.911 10:25:04 -- accel/accel.sh@20 -- # IFS=: 00:11:10.911 10:25:04 -- accel/accel.sh@20 -- # read -r var val 00:11:10.911 10:25:04 -- accel/accel.sh@21 -- # val= 00:11:10.911 10:25:04 -- accel/accel.sh@22 -- # case "$var" in 00:11:10.911 10:25:04 -- accel/accel.sh@20 -- # IFS=: 00:11:10.911 10:25:04 -- accel/accel.sh@20 -- # read -r var val 00:11:10.911 10:25:04 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:10.911 10:25:04 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:11:10.911 10:25:04 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:10.911 00:11:10.911 real 0m4.656s 00:11:10.911 user 0m4.182s 00:11:10.911 sys 0m0.341s 00:11:10.911 10:25:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:10.911 ************************************ 00:11:10.911 END TEST accel_dualcast 00:11:10.911 ************************************ 00:11:10.911 10:25:04 -- common/autotest_common.sh@10 -- # set +x 00:11:10.911 10:25:04 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:11:10.911 10:25:04 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:11:10.911 10:25:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:10.911 10:25:04 -- common/autotest_common.sh@10 -- # set +x 00:11:10.911 ************************************ 00:11:10.911 START TEST accel_compare 00:11:10.911 ************************************ 00:11:10.911 10:25:04 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:11:10.911 
10:25:04 -- accel/accel.sh@16 -- # local accel_opc 00:11:10.911 10:25:04 -- accel/accel.sh@17 -- # local accel_module 00:11:10.911 10:25:04 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:11:10.911 10:25:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:11:10.911 10:25:04 -- accel/accel.sh@12 -- # build_accel_config 00:11:10.911 10:25:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:10.911 10:25:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:10.911 10:25:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:10.911 10:25:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:10.911 10:25:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:10.911 10:25:04 -- accel/accel.sh@41 -- # local IFS=, 00:11:10.911 10:25:04 -- accel/accel.sh@42 -- # jq -r . 00:11:10.911 [2024-07-12 10:25:04.503759] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:10.911 [2024-07-12 10:25:04.503984] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109474 ] 00:11:10.912 [2024-07-12 10:25:04.670727] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.170 [2024-07-12 10:25:04.873772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.073 10:25:06 -- accel/accel.sh@18 -- # out=' 00:11:13.073 SPDK Configuration: 00:11:13.073 Core mask: 0x1 00:11:13.073 00:11:13.073 Accel Perf Configuration: 00:11:13.073 Workload Type: compare 00:11:13.073 Transfer size: 4096 bytes 00:11:13.073 Vector count 1 00:11:13.073 Module: software 00:11:13.073 Queue depth: 32 00:11:13.073 Allocate depth: 32 00:11:13.073 # threads/core: 1 00:11:13.073 Run time: 1 seconds 00:11:13.073 Verify: Yes 00:11:13.073 00:11:13.073 Running for 1 seconds... 00:11:13.073 00:11:13.073 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:13.073 ------------------------------------------------------------------------------------ 00:11:13.073 0,0 442176/s 1727 MiB/s 0 0 00:11:13.073 ==================================================================================== 00:11:13.073 Total 442176/s 1727 MiB/s 0 0' 00:11:13.073 10:25:06 -- accel/accel.sh@20 -- # IFS=: 00:11:13.073 10:25:06 -- accel/accel.sh@20 -- # read -r var val 00:11:13.073 10:25:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:11:13.073 10:25:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:11:13.073 10:25:06 -- accel/accel.sh@12 -- # build_accel_config 00:11:13.073 10:25:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:13.073 10:25:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:13.073 10:25:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:13.073 10:25:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:13.073 10:25:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:13.073 10:25:06 -- accel/accel.sh@41 -- # local IFS=, 00:11:13.073 10:25:06 -- accel/accel.sh@42 -- # jq -r . 00:11:13.073 [2024-07-12 10:25:06.844202] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
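The compare workload checks two equal-length buffers for equality; any mismatch is counted in the Miscompares column, which stays at 0 in a passing run. A hypothetical software equivalent is a thin wrapper over memcmp():

    #include <stddef.h>
    #include <string.h>

    /* Returns 0 when the 4096-byte buffers match; nonzero counts as a miscompare. */
    static int compare_buffers(const void *a, const void *b, size_t len)
    {
        return memcmp(a, b, len);
    }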
00:11:13.073 [2024-07-12 10:25:06.844400] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109525 ] 00:11:13.332 [2024-07-12 10:25:06.998998] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.332 [2024-07-12 10:25:07.220656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.589 10:25:07 -- accel/accel.sh@21 -- # val= 00:11:13.589 10:25:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.589 10:25:07 -- accel/accel.sh@20 -- # IFS=: 00:11:13.589 10:25:07 -- accel/accel.sh@20 -- # read -r var val 00:11:13.589 10:25:07 -- accel/accel.sh@21 -- # val= 00:11:13.589 10:25:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.589 10:25:07 -- accel/accel.sh@20 -- # IFS=: 00:11:13.589 10:25:07 -- accel/accel.sh@20 -- # read -r var val 00:11:13.589 10:25:07 -- accel/accel.sh@21 -- # val=0x1 00:11:13.589 10:25:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.589 10:25:07 -- accel/accel.sh@20 -- # IFS=: 00:11:13.589 10:25:07 -- accel/accel.sh@20 -- # read -r var val 00:11:13.589 10:25:07 -- accel/accel.sh@21 -- # val= 00:11:13.589 10:25:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.589 10:25:07 -- accel/accel.sh@20 -- # IFS=: 00:11:13.589 10:25:07 -- accel/accel.sh@20 -- # read -r var val 00:11:13.589 10:25:07 -- accel/accel.sh@21 -- # val= 00:11:13.589 10:25:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.589 10:25:07 -- accel/accel.sh@20 -- # IFS=: 00:11:13.589 10:25:07 -- accel/accel.sh@20 -- # read -r var val 00:11:13.589 10:25:07 -- accel/accel.sh@21 -- # val=compare 00:11:13.589 10:25:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.589 10:25:07 -- accel/accel.sh@24 -- # accel_opc=compare 00:11:13.589 10:25:07 -- accel/accel.sh@20 -- # IFS=: 00:11:13.589 10:25:07 -- accel/accel.sh@20 -- # read -r var val 00:11:13.589 10:25:07 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:13.589 10:25:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.589 10:25:07 -- accel/accel.sh@20 -- # IFS=: 00:11:13.589 10:25:07 -- accel/accel.sh@20 -- # read -r var val 00:11:13.589 10:25:07 -- accel/accel.sh@21 -- # val= 00:11:13.589 10:25:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.589 10:25:07 -- accel/accel.sh@20 -- # IFS=: 00:11:13.589 10:25:07 -- accel/accel.sh@20 -- # read -r var val 00:11:13.589 10:25:07 -- accel/accel.sh@21 -- # val=software 00:11:13.589 10:25:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.589 10:25:07 -- accel/accel.sh@23 -- # accel_module=software 00:11:13.589 10:25:07 -- accel/accel.sh@20 -- # IFS=: 00:11:13.589 10:25:07 -- accel/accel.sh@20 -- # read -r var val 00:11:13.589 10:25:07 -- accel/accel.sh@21 -- # val=32 00:11:13.589 10:25:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.589 10:25:07 -- accel/accel.sh@20 -- # IFS=: 00:11:13.589 10:25:07 -- accel/accel.sh@20 -- # read -r var val 00:11:13.589 10:25:07 -- accel/accel.sh@21 -- # val=32 00:11:13.589 10:25:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.589 10:25:07 -- accel/accel.sh@20 -- # IFS=: 00:11:13.589 10:25:07 -- accel/accel.sh@20 -- # read -r var val 00:11:13.589 10:25:07 -- accel/accel.sh@21 -- # val=1 00:11:13.589 10:25:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.589 10:25:07 -- accel/accel.sh@20 -- # IFS=: 00:11:13.589 10:25:07 -- accel/accel.sh@20 -- # read -r var val 00:11:13.589 10:25:07 -- accel/accel.sh@21 -- # val='1 seconds' 
00:11:13.589 10:25:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.589 10:25:07 -- accel/accel.sh@20 -- # IFS=: 00:11:13.589 10:25:07 -- accel/accel.sh@20 -- # read -r var val 00:11:13.589 10:25:07 -- accel/accel.sh@21 -- # val=Yes 00:11:13.589 10:25:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.589 10:25:07 -- accel/accel.sh@20 -- # IFS=: 00:11:13.589 10:25:07 -- accel/accel.sh@20 -- # read -r var val 00:11:13.589 10:25:07 -- accel/accel.sh@21 -- # val= 00:11:13.589 10:25:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.589 10:25:07 -- accel/accel.sh@20 -- # IFS=: 00:11:13.589 10:25:07 -- accel/accel.sh@20 -- # read -r var val 00:11:13.589 10:25:07 -- accel/accel.sh@21 -- # val= 00:11:13.589 10:25:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.589 10:25:07 -- accel/accel.sh@20 -- # IFS=: 00:11:13.589 10:25:07 -- accel/accel.sh@20 -- # read -r var val 00:11:15.492 10:25:09 -- accel/accel.sh@21 -- # val= 00:11:15.492 10:25:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.492 10:25:09 -- accel/accel.sh@20 -- # IFS=: 00:11:15.492 10:25:09 -- accel/accel.sh@20 -- # read -r var val 00:11:15.492 10:25:09 -- accel/accel.sh@21 -- # val= 00:11:15.492 10:25:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.492 10:25:09 -- accel/accel.sh@20 -- # IFS=: 00:11:15.492 10:25:09 -- accel/accel.sh@20 -- # read -r var val 00:11:15.492 10:25:09 -- accel/accel.sh@21 -- # val= 00:11:15.492 10:25:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.492 10:25:09 -- accel/accel.sh@20 -- # IFS=: 00:11:15.492 10:25:09 -- accel/accel.sh@20 -- # read -r var val 00:11:15.492 10:25:09 -- accel/accel.sh@21 -- # val= 00:11:15.492 10:25:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.492 10:25:09 -- accel/accel.sh@20 -- # IFS=: 00:11:15.492 10:25:09 -- accel/accel.sh@20 -- # read -r var val 00:11:15.492 10:25:09 -- accel/accel.sh@21 -- # val= 00:11:15.492 10:25:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.492 10:25:09 -- accel/accel.sh@20 -- # IFS=: 00:11:15.492 10:25:09 -- accel/accel.sh@20 -- # read -r var val 00:11:15.492 10:25:09 -- accel/accel.sh@21 -- # val= 00:11:15.492 10:25:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.492 10:25:09 -- accel/accel.sh@20 -- # IFS=: 00:11:15.492 10:25:09 -- accel/accel.sh@20 -- # read -r var val 00:11:15.492 10:25:09 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:15.492 10:25:09 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:11:15.492 10:25:09 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:15.492 00:11:15.492 real 0m4.824s 00:11:15.492 user 0m4.276s 00:11:15.492 sys 0m0.389s 00:11:15.492 10:25:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:15.492 10:25:09 -- common/autotest_common.sh@10 -- # set +x 00:11:15.492 ************************************ 00:11:15.492 END TEST accel_compare 00:11:15.492 ************************************ 00:11:15.492 10:25:09 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:11:15.492 10:25:09 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:11:15.492 10:25:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:15.492 10:25:09 -- common/autotest_common.sh@10 -- # set +x 00:11:15.492 ************************************ 00:11:15.492 START TEST accel_xor 00:11:15.493 ************************************ 00:11:15.493 10:25:09 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:11:15.493 10:25:09 -- accel/accel.sh@16 -- # local accel_opc 00:11:15.493 10:25:09 -- accel/accel.sh@17 -- # local accel_module 00:11:15.493 
10:25:09 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:11:15.493 10:25:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:11:15.493 10:25:09 -- accel/accel.sh@12 -- # build_accel_config 00:11:15.493 10:25:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:15.493 10:25:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:15.493 10:25:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:15.493 10:25:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:15.493 10:25:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:15.493 10:25:09 -- accel/accel.sh@41 -- # local IFS=, 00:11:15.493 10:25:09 -- accel/accel.sh@42 -- # jq -r . 00:11:15.493 [2024-07-12 10:25:09.383519] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:15.493 [2024-07-12 10:25:09.383833] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109585 ] 00:11:15.750 [2024-07-12 10:25:09.554877] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:16.008 [2024-07-12 10:25:09.765352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.909 10:25:11 -- accel/accel.sh@18 -- # out=' 00:11:17.909 SPDK Configuration: 00:11:17.909 Core mask: 0x1 00:11:17.909 00:11:17.909 Accel Perf Configuration: 00:11:17.909 Workload Type: xor 00:11:17.909 Source buffers: 2 00:11:17.909 Transfer size: 4096 bytes 00:11:17.909 Vector count 1 00:11:17.909 Module: software 00:11:17.909 Queue depth: 32 00:11:17.909 Allocate depth: 32 00:11:17.909 # threads/core: 1 00:11:17.909 Run time: 1 seconds 00:11:17.909 Verify: Yes 00:11:17.909 00:11:17.909 Running for 1 seconds... 00:11:17.909 00:11:17.909 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:17.909 ------------------------------------------------------------------------------------ 00:11:17.909 0,0 223008/s 871 MiB/s 0 0 00:11:17.909 ==================================================================================== 00:11:17.909 Total 223008/s 871 MiB/s 0 0' 00:11:17.909 10:25:11 -- accel/accel.sh@20 -- # IFS=: 00:11:17.909 10:25:11 -- accel/accel.sh@20 -- # read -r var val 00:11:17.909 10:25:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:11:17.909 10:25:11 -- accel/accel.sh@12 -- # build_accel_config 00:11:17.909 10:25:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:11:17.909 10:25:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:17.909 10:25:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:17.909 10:25:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:17.909 10:25:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:17.909 10:25:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:17.909 10:25:11 -- accel/accel.sh@41 -- # local IFS=, 00:11:17.909 10:25:11 -- accel/accel.sh@42 -- # jq -r . 00:11:17.909 [2024-07-12 10:25:11.753108] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
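The xor workload above XORs two 4096-byte source buffers byte-wise into a destination, the same primitive used to compute RAID-5 parity. A hypothetical scalar sketch; a real implementation would process word-sized or SIMD-sized chunks:

    #include <stddef.h>
    #include <stdint.h>

    /* dst[i] = s0[i] ^ s1[i] for the two-source case shown above. */
    static void xor_2src(uint8_t *dst, const uint8_t *s0, const uint8_t *s1,
                         size_t len)
    {
        for (size_t i = 0; i < len; i++)
            dst[i] = s0[i] ^ s1[i];
    }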
00:11:17.909 [2024-07-12 10:25:11.753316] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109618 ] 00:11:18.167 [2024-07-12 10:25:11.922648] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:18.425 [2024-07-12 10:25:12.128982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.425 10:25:12 -- accel/accel.sh@21 -- # val= 00:11:18.425 10:25:12 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.425 10:25:12 -- accel/accel.sh@20 -- # IFS=: 00:11:18.425 10:25:12 -- accel/accel.sh@20 -- # read -r var val 00:11:18.425 10:25:12 -- accel/accel.sh@21 -- # val= 00:11:18.425 10:25:12 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.425 10:25:12 -- accel/accel.sh@20 -- # IFS=: 00:11:18.425 10:25:12 -- accel/accel.sh@20 -- # read -r var val 00:11:18.425 10:25:12 -- accel/accel.sh@21 -- # val=0x1 00:11:18.425 10:25:12 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.425 10:25:12 -- accel/accel.sh@20 -- # IFS=: 00:11:18.425 10:25:12 -- accel/accel.sh@20 -- # read -r var val 00:11:18.425 10:25:12 -- accel/accel.sh@21 -- # val= 00:11:18.425 10:25:12 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.425 10:25:12 -- accel/accel.sh@20 -- # IFS=: 00:11:18.425 10:25:12 -- accel/accel.sh@20 -- # read -r var val 00:11:18.425 10:25:12 -- accel/accel.sh@21 -- # val= 00:11:18.425 10:25:12 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.425 10:25:12 -- accel/accel.sh@20 -- # IFS=: 00:11:18.425 10:25:12 -- accel/accel.sh@20 -- # read -r var val 00:11:18.425 10:25:12 -- accel/accel.sh@21 -- # val=xor 00:11:18.425 10:25:12 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.425 10:25:12 -- accel/accel.sh@24 -- # accel_opc=xor 00:11:18.425 10:25:12 -- accel/accel.sh@20 -- # IFS=: 00:11:18.425 10:25:12 -- accel/accel.sh@20 -- # read -r var val 00:11:18.425 10:25:12 -- accel/accel.sh@21 -- # val=2 00:11:18.425 10:25:12 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.425 10:25:12 -- accel/accel.sh@20 -- # IFS=: 00:11:18.425 10:25:12 -- accel/accel.sh@20 -- # read -r var val 00:11:18.425 10:25:12 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:18.425 10:25:12 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.425 10:25:12 -- accel/accel.sh@20 -- # IFS=: 00:11:18.425 10:25:12 -- accel/accel.sh@20 -- # read -r var val 00:11:18.425 10:25:12 -- accel/accel.sh@21 -- # val= 00:11:18.425 10:25:12 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.425 10:25:12 -- accel/accel.sh@20 -- # IFS=: 00:11:18.425 10:25:12 -- accel/accel.sh@20 -- # read -r var val 00:11:18.425 10:25:12 -- accel/accel.sh@21 -- # val=software 00:11:18.425 10:25:12 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.425 10:25:12 -- accel/accel.sh@23 -- # accel_module=software 00:11:18.425 10:25:12 -- accel/accel.sh@20 -- # IFS=: 00:11:18.425 10:25:12 -- accel/accel.sh@20 -- # read -r var val 00:11:18.425 10:25:12 -- accel/accel.sh@21 -- # val=32 00:11:18.425 10:25:12 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.425 10:25:12 -- accel/accel.sh@20 -- # IFS=: 00:11:18.425 10:25:12 -- accel/accel.sh@20 -- # read -r var val 00:11:18.425 10:25:12 -- accel/accel.sh@21 -- # val=32 00:11:18.425 10:25:12 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.425 10:25:12 -- accel/accel.sh@20 -- # IFS=: 00:11:18.425 10:25:12 -- accel/accel.sh@20 -- # read -r var val 00:11:18.425 10:25:12 -- accel/accel.sh@21 -- # val=1 00:11:18.425 10:25:12 -- 
accel/accel.sh@22 -- # case "$var" in 00:11:18.425 10:25:12 -- accel/accel.sh@20 -- # IFS=: 00:11:18.425 10:25:12 -- accel/accel.sh@20 -- # read -r var val 00:11:18.425 10:25:12 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:18.425 10:25:12 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.425 10:25:12 -- accel/accel.sh@20 -- # IFS=: 00:11:18.425 10:25:12 -- accel/accel.sh@20 -- # read -r var val 00:11:18.425 10:25:12 -- accel/accel.sh@21 -- # val=Yes 00:11:18.425 10:25:12 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.425 10:25:12 -- accel/accel.sh@20 -- # IFS=: 00:11:18.425 10:25:12 -- accel/accel.sh@20 -- # read -r var val 00:11:18.425 10:25:12 -- accel/accel.sh@21 -- # val= 00:11:18.425 10:25:12 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.425 10:25:12 -- accel/accel.sh@20 -- # IFS=: 00:11:18.425 10:25:12 -- accel/accel.sh@20 -- # read -r var val 00:11:18.425 10:25:12 -- accel/accel.sh@21 -- # val= 00:11:18.425 10:25:12 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.425 10:25:12 -- accel/accel.sh@20 -- # IFS=: 00:11:18.425 10:25:12 -- accel/accel.sh@20 -- # read -r var val 00:11:20.325 10:25:14 -- accel/accel.sh@21 -- # val= 00:11:20.325 10:25:14 -- accel/accel.sh@22 -- # case "$var" in 00:11:20.325 10:25:14 -- accel/accel.sh@20 -- # IFS=: 00:11:20.325 10:25:14 -- accel/accel.sh@20 -- # read -r var val 00:11:20.325 10:25:14 -- accel/accel.sh@21 -- # val= 00:11:20.325 10:25:14 -- accel/accel.sh@22 -- # case "$var" in 00:11:20.325 10:25:14 -- accel/accel.sh@20 -- # IFS=: 00:11:20.325 10:25:14 -- accel/accel.sh@20 -- # read -r var val 00:11:20.325 10:25:14 -- accel/accel.sh@21 -- # val= 00:11:20.325 10:25:14 -- accel/accel.sh@22 -- # case "$var" in 00:11:20.325 10:25:14 -- accel/accel.sh@20 -- # IFS=: 00:11:20.325 10:25:14 -- accel/accel.sh@20 -- # read -r var val 00:11:20.325 10:25:14 -- accel/accel.sh@21 -- # val= 00:11:20.325 10:25:14 -- accel/accel.sh@22 -- # case "$var" in 00:11:20.325 10:25:14 -- accel/accel.sh@20 -- # IFS=: 00:11:20.325 10:25:14 -- accel/accel.sh@20 -- # read -r var val 00:11:20.325 10:25:14 -- accel/accel.sh@21 -- # val= 00:11:20.325 10:25:14 -- accel/accel.sh@22 -- # case "$var" in 00:11:20.325 10:25:14 -- accel/accel.sh@20 -- # IFS=: 00:11:20.325 10:25:14 -- accel/accel.sh@20 -- # read -r var val 00:11:20.325 10:25:14 -- accel/accel.sh@21 -- # val= 00:11:20.325 10:25:14 -- accel/accel.sh@22 -- # case "$var" in 00:11:20.325 10:25:14 -- accel/accel.sh@20 -- # IFS=: 00:11:20.325 10:25:14 -- accel/accel.sh@20 -- # read -r var val 00:11:20.325 10:25:14 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:20.325 10:25:14 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:11:20.325 10:25:14 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:20.325 00:11:20.325 real 0m4.720s 00:11:20.325 user 0m4.199s 00:11:20.325 sys 0m0.380s 00:11:20.325 10:25:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:20.325 10:25:14 -- common/autotest_common.sh@10 -- # set +x 00:11:20.325 ************************************ 00:11:20.325 END TEST accel_xor 00:11:20.325 ************************************ 00:11:20.325 10:25:14 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:11:20.325 10:25:14 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:11:20.325 10:25:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:20.325 10:25:14 -- common/autotest_common.sh@10 -- # set +x 00:11:20.325 ************************************ 00:11:20.325 START TEST accel_xor 00:11:20.325 ************************************ 00:11:20.325 
10:25:14 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:11:20.325 10:25:14 -- accel/accel.sh@16 -- # local accel_opc 00:11:20.325 10:25:14 -- accel/accel.sh@17 -- # local accel_module 00:11:20.325 10:25:14 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:11:20.325 10:25:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:11:20.325 10:25:14 -- accel/accel.sh@12 -- # build_accel_config 00:11:20.325 10:25:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:20.325 10:25:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:20.325 10:25:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:20.325 10:25:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:20.325 10:25:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:20.325 10:25:14 -- accel/accel.sh@41 -- # local IFS=, 00:11:20.325 10:25:14 -- accel/accel.sh@42 -- # jq -r . 00:11:20.325 [2024-07-12 10:25:14.144048] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:20.325 [2024-07-12 10:25:14.144261] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109672 ] 00:11:20.583 [2024-07-12 10:25:14.301137] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:20.583 [2024-07-12 10:25:14.490662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.142 10:25:16 -- accel/accel.sh@18 -- # out=' 00:11:23.142 SPDK Configuration: 00:11:23.142 Core mask: 0x1 00:11:23.142 00:11:23.142 Accel Perf Configuration: 00:11:23.142 Workload Type: xor 00:11:23.142 Source buffers: 3 00:11:23.142 Transfer size: 4096 bytes 00:11:23.142 Vector count 1 00:11:23.142 Module: software 00:11:23.142 Queue depth: 32 00:11:23.142 Allocate depth: 32 00:11:23.142 # threads/core: 1 00:11:23.142 Run time: 1 seconds 00:11:23.142 Verify: Yes 00:11:23.142 00:11:23.142 Running for 1 seconds... 00:11:23.142 00:11:23.142 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:23.142 ------------------------------------------------------------------------------------ 00:11:23.142 0,0 217824/s 850 MiB/s 0 0 00:11:23.142 ==================================================================================== 00:11:23.142 Total 217824/s 850 MiB/s 0 0' 00:11:23.142 10:25:16 -- accel/accel.sh@20 -- # IFS=: 00:11:23.142 10:25:16 -- accel/accel.sh@20 -- # read -r var val 00:11:23.142 10:25:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:11:23.142 10:25:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:11:23.142 10:25:16 -- accel/accel.sh@12 -- # build_accel_config 00:11:23.142 10:25:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:23.142 10:25:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:23.142 10:25:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:23.142 10:25:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:23.142 10:25:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:23.142 10:25:16 -- accel/accel.sh@41 -- # local IFS=, 00:11:23.142 10:25:16 -- accel/accel.sh@42 -- # jq -r . 00:11:23.142 [2024-07-12 10:25:16.489814] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
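The -x 3 run above generalizes the XOR to three source buffers; throughput dips slightly relative to the two-source run (871 MiB/s down to 850 MiB/s in the tables above), presumably because each output byte now folds in one more input stream. A hypothetical N-source sketch:

    #include <stddef.h>
    #include <stdint.h>

    /* XOR nsrc source buffers into dst; nsrc == 3 for the -x 3 run above. */
    static void xor_nsrc(uint8_t *dst, uint8_t *const *srcs, int nsrc, size_t len)
    {
        for (size_t i = 0; i < len; i++) {
            uint8_t v = srcs[0][i];

            for (int j = 1; j < nsrc; j++)
                v ^= srcs[j][i];
            dst[i] = v;
        }
    }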
00:11:23.142 [2024-07-12 10:25:16.490031] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109720 ] 00:11:23.142 [2024-07-12 10:25:16.658259] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.142 [2024-07-12 10:25:16.854518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.142 10:25:17 -- accel/accel.sh@21 -- # val= 00:11:23.142 10:25:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.142 10:25:17 -- accel/accel.sh@20 -- # IFS=: 00:11:23.142 10:25:17 -- accel/accel.sh@20 -- # read -r var val 00:11:23.142 10:25:17 -- accel/accel.sh@21 -- # val= 00:11:23.142 10:25:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.142 10:25:17 -- accel/accel.sh@20 -- # IFS=: 00:11:23.142 10:25:17 -- accel/accel.sh@20 -- # read -r var val 00:11:23.142 10:25:17 -- accel/accel.sh@21 -- # val=0x1 00:11:23.142 10:25:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.142 10:25:17 -- accel/accel.sh@20 -- # IFS=: 00:11:23.142 10:25:17 -- accel/accel.sh@20 -- # read -r var val 00:11:23.142 10:25:17 -- accel/accel.sh@21 -- # val= 00:11:23.142 10:25:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.142 10:25:17 -- accel/accel.sh@20 -- # IFS=: 00:11:23.142 10:25:17 -- accel/accel.sh@20 -- # read -r var val 00:11:23.142 10:25:17 -- accel/accel.sh@21 -- # val= 00:11:23.142 10:25:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.142 10:25:17 -- accel/accel.sh@20 -- # IFS=: 00:11:23.142 10:25:17 -- accel/accel.sh@20 -- # read -r var val 00:11:23.142 10:25:17 -- accel/accel.sh@21 -- # val=xor 00:11:23.142 10:25:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.142 10:25:17 -- accel/accel.sh@24 -- # accel_opc=xor 00:11:23.142 10:25:17 -- accel/accel.sh@20 -- # IFS=: 00:11:23.142 10:25:17 -- accel/accel.sh@20 -- # read -r var val 00:11:23.142 10:25:17 -- accel/accel.sh@21 -- # val=3 00:11:23.142 10:25:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.142 10:25:17 -- accel/accel.sh@20 -- # IFS=: 00:11:23.142 10:25:17 -- accel/accel.sh@20 -- # read -r var val 00:11:23.142 10:25:17 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:23.142 10:25:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.142 10:25:17 -- accel/accel.sh@20 -- # IFS=: 00:11:23.142 10:25:17 -- accel/accel.sh@20 -- # read -r var val 00:11:23.142 10:25:17 -- accel/accel.sh@21 -- # val= 00:11:23.142 10:25:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.142 10:25:17 -- accel/accel.sh@20 -- # IFS=: 00:11:23.142 10:25:17 -- accel/accel.sh@20 -- # read -r var val 00:11:23.142 10:25:17 -- accel/accel.sh@21 -- # val=software 00:11:23.142 10:25:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.142 10:25:17 -- accel/accel.sh@23 -- # accel_module=software 00:11:23.142 10:25:17 -- accel/accel.sh@20 -- # IFS=: 00:11:23.142 10:25:17 -- accel/accel.sh@20 -- # read -r var val 00:11:23.142 10:25:17 -- accel/accel.sh@21 -- # val=32 00:11:23.142 10:25:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.142 10:25:17 -- accel/accel.sh@20 -- # IFS=: 00:11:23.142 10:25:17 -- accel/accel.sh@20 -- # read -r var val 00:11:23.142 10:25:17 -- accel/accel.sh@21 -- # val=32 00:11:23.142 10:25:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.142 10:25:17 -- accel/accel.sh@20 -- # IFS=: 00:11:23.142 10:25:17 -- accel/accel.sh@20 -- # read -r var val 00:11:23.142 10:25:17 -- accel/accel.sh@21 -- # val=1 00:11:23.142 10:25:17 -- 
accel/accel.sh@22 -- # case "$var" in 00:11:23.142 10:25:17 -- accel/accel.sh@20 -- # IFS=: 00:11:23.142 10:25:17 -- accel/accel.sh@20 -- # read -r var val 00:11:23.142 10:25:17 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:23.142 10:25:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.142 10:25:17 -- accel/accel.sh@20 -- # IFS=: 00:11:23.142 10:25:17 -- accel/accel.sh@20 -- # read -r var val 00:11:23.142 10:25:17 -- accel/accel.sh@21 -- # val=Yes 00:11:23.142 10:25:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.142 10:25:17 -- accel/accel.sh@20 -- # IFS=: 00:11:23.142 10:25:17 -- accel/accel.sh@20 -- # read -r var val 00:11:23.142 10:25:17 -- accel/accel.sh@21 -- # val= 00:11:23.142 10:25:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.142 10:25:17 -- accel/accel.sh@20 -- # IFS=: 00:11:23.142 10:25:17 -- accel/accel.sh@20 -- # read -r var val 00:11:23.142 10:25:17 -- accel/accel.sh@21 -- # val= 00:11:23.142 10:25:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.142 10:25:17 -- accel/accel.sh@20 -- # IFS=: 00:11:23.142 10:25:17 -- accel/accel.sh@20 -- # read -r var val 00:11:25.042 10:25:18 -- accel/accel.sh@21 -- # val= 00:11:25.042 10:25:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.042 10:25:18 -- accel/accel.sh@20 -- # IFS=: 00:11:25.042 10:25:18 -- accel/accel.sh@20 -- # read -r var val 00:11:25.042 10:25:18 -- accel/accel.sh@21 -- # val= 00:11:25.042 10:25:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.042 10:25:18 -- accel/accel.sh@20 -- # IFS=: 00:11:25.042 10:25:18 -- accel/accel.sh@20 -- # read -r var val 00:11:25.042 10:25:18 -- accel/accel.sh@21 -- # val= 00:11:25.042 10:25:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.042 10:25:18 -- accel/accel.sh@20 -- # IFS=: 00:11:25.042 10:25:18 -- accel/accel.sh@20 -- # read -r var val 00:11:25.042 10:25:18 -- accel/accel.sh@21 -- # val= 00:11:25.042 10:25:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.042 10:25:18 -- accel/accel.sh@20 -- # IFS=: 00:11:25.042 10:25:18 -- accel/accel.sh@20 -- # read -r var val 00:11:25.042 10:25:18 -- accel/accel.sh@21 -- # val= 00:11:25.042 10:25:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.042 10:25:18 -- accel/accel.sh@20 -- # IFS=: 00:11:25.042 10:25:18 -- accel/accel.sh@20 -- # read -r var val 00:11:25.042 10:25:18 -- accel/accel.sh@21 -- # val= 00:11:25.042 10:25:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.042 10:25:18 -- accel/accel.sh@20 -- # IFS=: 00:11:25.042 10:25:18 -- accel/accel.sh@20 -- # read -r var val 00:11:25.042 10:25:18 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:25.042 10:25:18 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:11:25.042 10:25:18 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:25.042 00:11:25.042 real 0m4.700s 00:11:25.042 user 0m4.195s 00:11:25.042 sys 0m0.363s 00:11:25.042 10:25:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:25.042 ************************************ 00:11:25.042 END TEST accel_xor 00:11:25.042 ************************************ 00:11:25.042 10:25:18 -- common/autotest_common.sh@10 -- # set +x 00:11:25.042 10:25:18 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:11:25.042 10:25:18 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:11:25.042 10:25:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:25.042 10:25:18 -- common/autotest_common.sh@10 -- # set +x 00:11:25.042 ************************************ 00:11:25.042 START TEST accel_dif_verify 00:11:25.042 ************************************ 
00:11:25.042 10:25:18 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:11:25.042 10:25:18 -- accel/accel.sh@16 -- # local accel_opc 00:11:25.042 10:25:18 -- accel/accel.sh@17 -- # local accel_module 00:11:25.042 10:25:18 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:11:25.042 10:25:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:11:25.042 10:25:18 -- accel/accel.sh@12 -- # build_accel_config 00:11:25.042 10:25:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:25.042 10:25:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:25.042 10:25:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:25.042 10:25:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:25.042 10:25:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:25.042 10:25:18 -- accel/accel.sh@41 -- # local IFS=, 00:11:25.042 10:25:18 -- accel/accel.sh@42 -- # jq -r . 00:11:25.042 [2024-07-12 10:25:18.901898] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:25.042 [2024-07-12 10:25:18.902098] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109767 ] 00:11:25.300 [2024-07-12 10:25:19.069922] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:25.558 [2024-07-12 10:25:19.261520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.461 10:25:21 -- accel/accel.sh@18 -- # out=' 00:11:27.461 SPDK Configuration: 00:11:27.461 Core mask: 0x1 00:11:27.461 00:11:27.461 Accel Perf Configuration: 00:11:27.461 Workload Type: dif_verify 00:11:27.461 Vector size: 4096 bytes 00:11:27.461 Transfer size: 4096 bytes 00:11:27.461 Block size: 512 bytes 00:11:27.461 Metadata size: 8 bytes 00:11:27.461 Vector count 1 00:11:27.461 Module: software 00:11:27.461 Queue depth: 32 00:11:27.461 Allocate depth: 32 00:11:27.461 # threads/core: 1 00:11:27.461 Run time: 1 seconds 00:11:27.461 Verify: No 00:11:27.461 00:11:27.461 Running for 1 seconds... 00:11:27.461 00:11:27.461 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:27.461 ------------------------------------------------------------------------------------ 00:11:27.461 0,0 113376/s 449 MiB/s 0 0 00:11:27.461 ==================================================================================== 00:11:27.461 Total 113376/s 442 MiB/s 0 0' 00:11:27.461 10:25:21 -- accel/accel.sh@20 -- # IFS=: 00:11:27.461 10:25:21 -- accel/accel.sh@20 -- # read -r var val 00:11:27.461 10:25:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:11:27.461 10:25:21 -- accel/accel.sh@12 -- # build_accel_config 00:11:27.461 10:25:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:11:27.461 10:25:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:27.461 10:25:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:27.461 10:25:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:27.461 10:25:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:27.461 10:25:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:27.461 10:25:21 -- accel/accel.sh@41 -- # local IFS=, 00:11:27.461 10:25:21 -- accel/accel.sh@42 -- # jq -r . 00:11:27.461 [2024-07-12 10:25:21.231467] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
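The dif_verify workload walks a buffer laid out as 512-byte data blocks, each followed by 8 bytes of T10 protection information (matching the "Block size: 512 bytes" and "Metadata size: 8 bytes" lines above), and re-checks the protection fields; the run reports "Verify: No" because the operation is itself the verification. The sketch below re-computes only the 16-bit guard CRC over the standard T10-DIF polynomial 0x8BB7 and skips the application- and reference-tag checks; it is a hypothetical illustration, not SPDK's DIF code.

    #include <stddef.h>
    #include <stdint.h>

    /* CRC-16/T10-DIF: polynomial 0x8BB7, init 0, no reflection. */
    static uint16_t crc16_t10dif(const uint8_t *data, size_t len)
    {
        uint16_t crc = 0;

        for (size_t i = 0; i < len; i++) {
            crc ^= (uint16_t)((uint16_t)data[i] << 8);
            for (int k = 0; k < 8; k++)
                crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x8BB7)
                                     : (uint16_t)(crc << 1);
        }
        return crc;
    }

    /* Count blocks whose big-endian guard tag (first 2 metadata bytes)
     * does not match the CRC of the preceding 512 data bytes. */
    static size_t dif_verify_guards(const uint8_t *buf, size_t nblocks)
    {
        size_t bad = 0;

        for (size_t b = 0; b < nblocks; b++) {
            const uint8_t *blk = buf + b * (512 + 8);
            uint16_t guard = (uint16_t)((blk[512] << 8) | blk[513]);

            if (crc16_t10dif(blk, 512) != guard)
                bad++;
        }
        return bad;
    }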
00:11:27.461 [2024-07-12 10:25:21.231796] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109808 ] 00:11:27.720 [2024-07-12 10:25:21.401018] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:27.720 [2024-07-12 10:25:21.600526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.978 10:25:21 -- accel/accel.sh@21 -- # val= 00:11:27.978 10:25:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.978 10:25:21 -- accel/accel.sh@20 -- # IFS=: 00:11:27.978 10:25:21 -- accel/accel.sh@20 -- # read -r var val 00:11:27.978 10:25:21 -- accel/accel.sh@21 -- # val= 00:11:27.978 10:25:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.978 10:25:21 -- accel/accel.sh@20 -- # IFS=: 00:11:27.978 10:25:21 -- accel/accel.sh@20 -- # read -r var val 00:11:27.978 10:25:21 -- accel/accel.sh@21 -- # val=0x1 00:11:27.978 10:25:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.978 10:25:21 -- accel/accel.sh@20 -- # IFS=: 00:11:27.978 10:25:21 -- accel/accel.sh@20 -- # read -r var val 00:11:27.978 10:25:21 -- accel/accel.sh@21 -- # val= 00:11:27.978 10:25:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.978 10:25:21 -- accel/accel.sh@20 -- # IFS=: 00:11:27.978 10:25:21 -- accel/accel.sh@20 -- # read -r var val 00:11:27.978 10:25:21 -- accel/accel.sh@21 -- # val= 00:11:27.978 10:25:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.978 10:25:21 -- accel/accel.sh@20 -- # IFS=: 00:11:27.978 10:25:21 -- accel/accel.sh@20 -- # read -r var val 00:11:27.978 10:25:21 -- accel/accel.sh@21 -- # val=dif_verify 00:11:27.978 10:25:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.978 10:25:21 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:11:27.978 10:25:21 -- accel/accel.sh@20 -- # IFS=: 00:11:27.978 10:25:21 -- accel/accel.sh@20 -- # read -r var val 00:11:27.978 10:25:21 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:27.978 10:25:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.978 10:25:21 -- accel/accel.sh@20 -- # IFS=: 00:11:27.978 10:25:21 -- accel/accel.sh@20 -- # read -r var val 00:11:27.978 10:25:21 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:27.978 10:25:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.978 10:25:21 -- accel/accel.sh@20 -- # IFS=: 00:11:27.978 10:25:21 -- accel/accel.sh@20 -- # read -r var val 00:11:27.978 10:25:21 -- accel/accel.sh@21 -- # val='512 bytes' 00:11:27.978 10:25:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.978 10:25:21 -- accel/accel.sh@20 -- # IFS=: 00:11:27.978 10:25:21 -- accel/accel.sh@20 -- # read -r var val 00:11:27.978 10:25:21 -- accel/accel.sh@21 -- # val='8 bytes' 00:11:27.978 10:25:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.978 10:25:21 -- accel/accel.sh@20 -- # IFS=: 00:11:27.978 10:25:21 -- accel/accel.sh@20 -- # read -r var val 00:11:27.978 10:25:21 -- accel/accel.sh@21 -- # val= 00:11:27.978 10:25:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.978 10:25:21 -- accel/accel.sh@20 -- # IFS=: 00:11:27.978 10:25:21 -- accel/accel.sh@20 -- # read -r var val 00:11:27.978 10:25:21 -- accel/accel.sh@21 -- # val=software 00:11:27.978 10:25:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.978 10:25:21 -- accel/accel.sh@23 -- # accel_module=software 00:11:27.978 10:25:21 -- accel/accel.sh@20 -- # IFS=: 00:11:27.978 10:25:21 -- accel/accel.sh@20 -- # read -r var val 00:11:27.978 10:25:21 -- 
accel/accel.sh@21 -- # val=32 00:11:27.978 10:25:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.978 10:25:21 -- accel/accel.sh@20 -- # IFS=: 00:11:27.978 10:25:21 -- accel/accel.sh@20 -- # read -r var val 00:11:27.978 10:25:21 -- accel/accel.sh@21 -- # val=32 00:11:27.978 10:25:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.978 10:25:21 -- accel/accel.sh@20 -- # IFS=: 00:11:27.978 10:25:21 -- accel/accel.sh@20 -- # read -r var val 00:11:27.978 10:25:21 -- accel/accel.sh@21 -- # val=1 00:11:27.978 10:25:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.978 10:25:21 -- accel/accel.sh@20 -- # IFS=: 00:11:27.978 10:25:21 -- accel/accel.sh@20 -- # read -r var val 00:11:27.978 10:25:21 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:27.978 10:25:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.978 10:25:21 -- accel/accel.sh@20 -- # IFS=: 00:11:27.978 10:25:21 -- accel/accel.sh@20 -- # read -r var val 00:11:27.978 10:25:21 -- accel/accel.sh@21 -- # val=No 00:11:27.978 10:25:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.978 10:25:21 -- accel/accel.sh@20 -- # IFS=: 00:11:27.978 10:25:21 -- accel/accel.sh@20 -- # read -r var val 00:11:27.978 10:25:21 -- accel/accel.sh@21 -- # val= 00:11:27.978 10:25:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.978 10:25:21 -- accel/accel.sh@20 -- # IFS=: 00:11:27.978 10:25:21 -- accel/accel.sh@20 -- # read -r var val 00:11:27.978 10:25:21 -- accel/accel.sh@21 -- # val= 00:11:27.978 10:25:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.978 10:25:21 -- accel/accel.sh@20 -- # IFS=: 00:11:27.978 10:25:21 -- accel/accel.sh@20 -- # read -r var val 00:11:29.879 10:25:23 -- accel/accel.sh@21 -- # val= 00:11:29.879 10:25:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:29.879 10:25:23 -- accel/accel.sh@20 -- # IFS=: 00:11:29.879 10:25:23 -- accel/accel.sh@20 -- # read -r var val 00:11:29.879 10:25:23 -- accel/accel.sh@21 -- # val= 00:11:29.879 10:25:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:29.879 10:25:23 -- accel/accel.sh@20 -- # IFS=: 00:11:29.879 10:25:23 -- accel/accel.sh@20 -- # read -r var val 00:11:29.879 10:25:23 -- accel/accel.sh@21 -- # val= 00:11:29.879 10:25:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:29.879 10:25:23 -- accel/accel.sh@20 -- # IFS=: 00:11:29.879 10:25:23 -- accel/accel.sh@20 -- # read -r var val 00:11:29.879 10:25:23 -- accel/accel.sh@21 -- # val= 00:11:29.879 10:25:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:29.879 10:25:23 -- accel/accel.sh@20 -- # IFS=: 00:11:29.879 10:25:23 -- accel/accel.sh@20 -- # read -r var val 00:11:29.879 10:25:23 -- accel/accel.sh@21 -- # val= 00:11:29.879 10:25:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:29.879 10:25:23 -- accel/accel.sh@20 -- # IFS=: 00:11:29.879 10:25:23 -- accel/accel.sh@20 -- # read -r var val 00:11:29.879 10:25:23 -- accel/accel.sh@21 -- # val= 00:11:29.879 10:25:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:29.879 10:25:23 -- accel/accel.sh@20 -- # IFS=: 00:11:29.879 10:25:23 -- accel/accel.sh@20 -- # read -r var val 00:11:29.879 10:25:23 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:29.879 10:25:23 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:11:29.879 10:25:23 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:29.879 00:11:29.879 real 0m4.676s 00:11:29.879 user 0m4.190s 00:11:29.879 sys 0m0.326s 00:11:29.879 10:25:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:29.879 ************************************ 00:11:29.879 END TEST accel_dif_verify 00:11:29.879 
************************************ 00:11:29.879 10:25:23 -- common/autotest_common.sh@10 -- # set +x 00:11:29.879 10:25:23 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:11:29.879 10:25:23 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:11:29.879 10:25:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:29.879 10:25:23 -- common/autotest_common.sh@10 -- # set +x 00:11:29.879 ************************************ 00:11:29.879 START TEST accel_dif_generate 00:11:29.879 ************************************ 00:11:29.879 10:25:23 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:11:29.879 10:25:23 -- accel/accel.sh@16 -- # local accel_opc 00:11:29.879 10:25:23 -- accel/accel.sh@17 -- # local accel_module 00:11:29.879 10:25:23 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:11:29.879 10:25:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:11:29.879 10:25:23 -- accel/accel.sh@12 -- # build_accel_config 00:11:29.879 10:25:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:29.879 10:25:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:29.879 10:25:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:29.879 10:25:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:29.879 10:25:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:29.879 10:25:23 -- accel/accel.sh@41 -- # local IFS=, 00:11:29.879 10:25:23 -- accel/accel.sh@42 -- # jq -r . 00:11:29.879 [2024-07-12 10:25:23.634197] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:29.879 [2024-07-12 10:25:23.634392] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109856 ] 00:11:29.879 [2024-07-12 10:25:23.802065] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:30.137 [2024-07-12 10:25:23.997465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.038 10:25:25 -- accel/accel.sh@18 -- # out=' 00:11:32.038 SPDK Configuration: 00:11:32.038 Core mask: 0x1 00:11:32.038 00:11:32.038 Accel Perf Configuration: 00:11:32.038 Workload Type: dif_generate 00:11:32.038 Vector size: 4096 bytes 00:11:32.038 Transfer size: 4096 bytes 00:11:32.038 Block size: 512 bytes 00:11:32.038 Metadata size: 8 bytes 00:11:32.038 Vector count 1 00:11:32.038 Module: software 00:11:32.038 Queue depth: 32 00:11:32.038 Allocate depth: 32 00:11:32.038 # threads/core: 1 00:11:32.038 Run time: 1 seconds 00:11:32.038 Verify: No 00:11:32.038 00:11:32.038 Running for 1 seconds... 
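A note on the dense '-- #' records throughout: they are bash xtrace output, emitted with a custom PS4 that prefixes each traced command with its script@line location (assumed from the visible format). The harness switches tracing off inside helpers via xtrace_disable and 'set +x', which is why those toggles keep surfacing in the trace. A rough sketch of the pattern, a simplification rather than the actual autotest_common.sh source:

    set -x                       # trace on: produces the 'file@N -- # cmd' records seen above
    accel_perf -t 1 -w dif_generate
    { set +x; } 2>/dev/null      # trace off without tracing the toggle itself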
00:11:32.038 00:11:32.038 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:32.038 ------------------------------------------------------------------------------------ 00:11:32.038 0,0 133664/s 530 MiB/s 0 0 00:11:32.038 ==================================================================================== 00:11:32.038 Total 133664/s 522 MiB/s 0 0' 00:11:32.038 10:25:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:11:32.038 10:25:25 -- accel/accel.sh@20 -- # IFS=: 00:11:32.038 10:25:25 -- accel/accel.sh@20 -- # read -r var val 00:11:32.038 10:25:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:11:32.038 10:25:25 -- accel/accel.sh@12 -- # build_accel_config 00:11:32.038 10:25:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:32.038 10:25:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:32.038 10:25:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:32.038 10:25:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:32.038 10:25:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:32.038 10:25:25 -- accel/accel.sh@41 -- # local IFS=, 00:11:32.038 10:25:25 -- accel/accel.sh@42 -- # jq -r . 00:11:32.296 [2024-07-12 10:25:25.990044] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:32.296 [2024-07-12 10:25:25.990233] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109912 ] 00:11:32.296 [2024-07-12 10:25:26.156999] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:32.554 [2024-07-12 10:25:26.359864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.812 10:25:26 -- accel/accel.sh@21 -- # val= 00:11:32.812 10:25:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:32.812 10:25:26 -- accel/accel.sh@20 -- # IFS=: 00:11:32.812 10:25:26 -- accel/accel.sh@20 -- # read -r var val 00:11:32.812 10:25:26 -- accel/accel.sh@21 -- # val= 00:11:32.812 10:25:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:32.812 10:25:26 -- accel/accel.sh@20 -- # IFS=: 00:11:32.812 10:25:26 -- accel/accel.sh@20 -- # read -r var val 00:11:32.812 10:25:26 -- accel/accel.sh@21 -- # val=0x1 00:11:32.812 10:25:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:32.812 10:25:26 -- accel/accel.sh@20 -- # IFS=: 00:11:32.812 10:25:26 -- accel/accel.sh@20 -- # read -r var val 00:11:32.812 10:25:26 -- accel/accel.sh@21 -- # val= 00:11:32.812 10:25:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:32.812 10:25:26 -- accel/accel.sh@20 -- # IFS=: 00:11:32.812 10:25:26 -- accel/accel.sh@20 -- # read -r var val 00:11:32.812 10:25:26 -- accel/accel.sh@21 -- # val= 00:11:32.812 10:25:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:32.812 10:25:26 -- accel/accel.sh@20 -- # IFS=: 00:11:32.812 10:25:26 -- accel/accel.sh@20 -- # read -r var val 00:11:32.812 10:25:26 -- accel/accel.sh@21 -- # val=dif_generate 00:11:32.812 10:25:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:32.812 10:25:26 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:11:32.812 10:25:26 -- accel/accel.sh@20 -- # IFS=: 00:11:32.812 10:25:26 -- accel/accel.sh@20 -- # read -r var val 00:11:32.812 10:25:26 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:32.812 10:25:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:32.812 10:25:26 -- accel/accel.sh@20 -- # IFS=: 00:11:32.812 10:25:26 -- accel/accel.sh@20 -- # read -r var val 
00:11:32.812 10:25:26 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:32.813 10:25:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:32.813 10:25:26 -- accel/accel.sh@20 -- # IFS=: 00:11:32.813 10:25:26 -- accel/accel.sh@20 -- # read -r var val 00:11:32.813 10:25:26 -- accel/accel.sh@21 -- # val='512 bytes' 00:11:32.813 10:25:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:32.813 10:25:26 -- accel/accel.sh@20 -- # IFS=: 00:11:32.813 10:25:26 -- accel/accel.sh@20 -- # read -r var val 00:11:32.813 10:25:26 -- accel/accel.sh@21 -- # val='8 bytes' 00:11:32.813 10:25:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:32.813 10:25:26 -- accel/accel.sh@20 -- # IFS=: 00:11:32.813 10:25:26 -- accel/accel.sh@20 -- # read -r var val 00:11:32.813 10:25:26 -- accel/accel.sh@21 -- # val= 00:11:32.813 10:25:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:32.813 10:25:26 -- accel/accel.sh@20 -- # IFS=: 00:11:32.813 10:25:26 -- accel/accel.sh@20 -- # read -r var val 00:11:32.813 10:25:26 -- accel/accel.sh@21 -- # val=software 00:11:32.813 10:25:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:32.813 10:25:26 -- accel/accel.sh@23 -- # accel_module=software 00:11:32.813 10:25:26 -- accel/accel.sh@20 -- # IFS=: 00:11:32.813 10:25:26 -- accel/accel.sh@20 -- # read -r var val 00:11:32.813 10:25:26 -- accel/accel.sh@21 -- # val=32 00:11:32.813 10:25:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:32.813 10:25:26 -- accel/accel.sh@20 -- # IFS=: 00:11:32.813 10:25:26 -- accel/accel.sh@20 -- # read -r var val 00:11:32.813 10:25:26 -- accel/accel.sh@21 -- # val=32 00:11:32.813 10:25:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:32.813 10:25:26 -- accel/accel.sh@20 -- # IFS=: 00:11:32.813 10:25:26 -- accel/accel.sh@20 -- # read -r var val 00:11:32.813 10:25:26 -- accel/accel.sh@21 -- # val=1 00:11:32.813 10:25:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:32.813 10:25:26 -- accel/accel.sh@20 -- # IFS=: 00:11:32.813 10:25:26 -- accel/accel.sh@20 -- # read -r var val 00:11:32.813 10:25:26 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:32.813 10:25:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:32.813 10:25:26 -- accel/accel.sh@20 -- # IFS=: 00:11:32.813 10:25:26 -- accel/accel.sh@20 -- # read -r var val 00:11:32.813 10:25:26 -- accel/accel.sh@21 -- # val=No 00:11:32.813 10:25:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:32.813 10:25:26 -- accel/accel.sh@20 -- # IFS=: 00:11:32.813 10:25:26 -- accel/accel.sh@20 -- # read -r var val 00:11:32.813 10:25:26 -- accel/accel.sh@21 -- # val= 00:11:32.813 10:25:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:32.813 10:25:26 -- accel/accel.sh@20 -- # IFS=: 00:11:32.813 10:25:26 -- accel/accel.sh@20 -- # read -r var val 00:11:32.813 10:25:26 -- accel/accel.sh@21 -- # val= 00:11:32.813 10:25:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:32.813 10:25:26 -- accel/accel.sh@20 -- # IFS=: 00:11:32.813 10:25:26 -- accel/accel.sh@20 -- # read -r var val 00:11:34.714 10:25:28 -- accel/accel.sh@21 -- # val= 00:11:34.714 10:25:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:34.714 10:25:28 -- accel/accel.sh@20 -- # IFS=: 00:11:34.714 10:25:28 -- accel/accel.sh@20 -- # read -r var val 00:11:34.714 10:25:28 -- accel/accel.sh@21 -- # val= 00:11:34.714 10:25:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:34.714 10:25:28 -- accel/accel.sh@20 -- # IFS=: 00:11:34.714 10:25:28 -- accel/accel.sh@20 -- # read -r var val 00:11:34.714 10:25:28 -- accel/accel.sh@21 -- # val= 00:11:34.714 10:25:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:34.714 10:25:28 -- 
accel/accel.sh@20 -- # IFS=: 00:11:34.714 10:25:28 -- accel/accel.sh@20 -- # read -r var val 00:11:34.714 10:25:28 -- accel/accel.sh@21 -- # val= 00:11:34.714 10:25:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:34.714 10:25:28 -- accel/accel.sh@20 -- # IFS=: 00:11:34.715 10:25:28 -- accel/accel.sh@20 -- # read -r var val 00:11:34.715 10:25:28 -- accel/accel.sh@21 -- # val= 00:11:34.715 10:25:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:34.715 10:25:28 -- accel/accel.sh@20 -- # IFS=: 00:11:34.715 10:25:28 -- accel/accel.sh@20 -- # read -r var val 00:11:34.715 10:25:28 -- accel/accel.sh@21 -- # val= 00:11:34.715 10:25:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:34.715 10:25:28 -- accel/accel.sh@20 -- # IFS=: 00:11:34.715 10:25:28 -- accel/accel.sh@20 -- # read -r var val 00:11:34.715 10:25:28 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:34.715 10:25:28 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:11:34.715 10:25:28 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:34.715 00:11:34.715 real 0m4.732s 00:11:34.715 user 0m4.263s 00:11:34.715 sys 0m0.333s 00:11:34.715 10:25:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:34.715 10:25:28 -- common/autotest_common.sh@10 -- # set +x 00:11:34.715 ************************************ 00:11:34.715 END TEST accel_dif_generate 00:11:34.715 ************************************ 00:11:34.715 10:25:28 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:11:34.715 10:25:28 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:11:34.715 10:25:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:34.715 10:25:28 -- common/autotest_common.sh@10 -- # set +x 00:11:34.715 ************************************ 00:11:34.715 START TEST accel_dif_generate_copy 00:11:34.715 ************************************ 00:11:34.715 10:25:28 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:11:34.715 10:25:28 -- accel/accel.sh@16 -- # local accel_opc 00:11:34.715 10:25:28 -- accel/accel.sh@17 -- # local accel_module 00:11:34.715 10:25:28 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:11:34.715 10:25:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:11:34.715 10:25:28 -- accel/accel.sh@12 -- # build_accel_config 00:11:34.715 10:25:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:34.715 10:25:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:34.715 10:25:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:34.715 10:25:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:34.715 10:25:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:34.715 10:25:28 -- accel/accel.sh@41 -- # local IFS=, 00:11:34.715 10:25:28 -- accel/accel.sh@42 -- # jq -r . 00:11:34.715 [2024-07-12 10:25:28.422561] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
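The Total rows in the result tables above are easy to sanity-check: total bandwidth is simply transfers per second times the 4096-byte transfer size. (The per-core column reads slightly higher, presumably because it divides by the measured elapsed time rather than the nominal one second.) For example:

    # Total-row check: transfers/s * 4096 B / 2^20 = MiB/s
    echo $(( 113376 * 4096 / 1048576 ))   # dif_verify   -> 442, as reported
    echo $(( 133664 * 4096 / 1048576 ))   # dif_generate -> 522, as reported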
00:11:34.715 [2024-07-12 10:25:28.422751] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109967 ] 00:11:34.715 [2024-07-12 10:25:28.588481] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:34.973 [2024-07-12 10:25:28.771735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.875 10:25:30 -- accel/accel.sh@18 -- # out=' 00:11:36.875 SPDK Configuration: 00:11:36.875 Core mask: 0x1 00:11:36.875 00:11:36.875 Accel Perf Configuration: 00:11:36.875 Workload Type: dif_generate_copy 00:11:36.875 Vector size: 4096 bytes 00:11:36.875 Transfer size: 4096 bytes 00:11:36.875 Vector count 1 00:11:36.875 Module: software 00:11:36.875 Queue depth: 32 00:11:36.875 Allocate depth: 32 00:11:36.875 # threads/core: 1 00:11:36.875 Run time: 1 seconds 00:11:36.875 Verify: No 00:11:36.875 00:11:36.875 Running for 1 seconds... 00:11:36.875 00:11:36.875 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:36.875 ------------------------------------------------------------------------------------ 00:11:36.875 0,0 102144/s 405 MiB/s 0 0 00:11:36.875 ==================================================================================== 00:11:36.875 Total 102144/s 399 MiB/s 0 0' 00:11:36.875 10:25:30 -- accel/accel.sh@20 -- # IFS=: 00:11:36.875 10:25:30 -- accel/accel.sh@20 -- # read -r var val 00:11:36.875 10:25:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:11:36.875 10:25:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:11:36.875 10:25:30 -- accel/accel.sh@12 -- # build_accel_config 00:11:36.875 10:25:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:36.875 10:25:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:36.875 10:25:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:36.875 10:25:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:36.875 10:25:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:36.875 10:25:30 -- accel/accel.sh@41 -- # local IFS=, 00:11:36.875 10:25:30 -- accel/accel.sh@42 -- # jq -r . 00:11:36.875 [2024-07-12 10:25:30.748425] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:11:36.875 [2024-07-12 10:25:30.748684] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110004 ] 00:11:37.134 [2024-07-12 10:25:30.913677] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:37.393 [2024-07-12 10:25:31.097716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:37.393 10:25:31 -- accel/accel.sh@21 -- # val= 00:11:37.393 10:25:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.393 10:25:31 -- accel/accel.sh@20 -- # IFS=: 00:11:37.393 10:25:31 -- accel/accel.sh@20 -- # read -r var val 00:11:37.393 10:25:31 -- accel/accel.sh@21 -- # val= 00:11:37.393 10:25:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.393 10:25:31 -- accel/accel.sh@20 -- # IFS=: 00:11:37.393 10:25:31 -- accel/accel.sh@20 -- # read -r var val 00:11:37.393 10:25:31 -- accel/accel.sh@21 -- # val=0x1 00:11:37.394 10:25:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.394 10:25:31 -- accel/accel.sh@20 -- # IFS=: 00:11:37.394 10:25:31 -- accel/accel.sh@20 -- # read -r var val 00:11:37.394 10:25:31 -- accel/accel.sh@21 -- # val= 00:11:37.394 10:25:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.394 10:25:31 -- accel/accel.sh@20 -- # IFS=: 00:11:37.394 10:25:31 -- accel/accel.sh@20 -- # read -r var val 00:11:37.394 10:25:31 -- accel/accel.sh@21 -- # val= 00:11:37.394 10:25:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.394 10:25:31 -- accel/accel.sh@20 -- # IFS=: 00:11:37.394 10:25:31 -- accel/accel.sh@20 -- # read -r var val 00:11:37.394 10:25:31 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:11:37.394 10:25:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.394 10:25:31 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:11:37.394 10:25:31 -- accel/accel.sh@20 -- # IFS=: 00:11:37.394 10:25:31 -- accel/accel.sh@20 -- # read -r var val 00:11:37.394 10:25:31 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:37.394 10:25:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.394 10:25:31 -- accel/accel.sh@20 -- # IFS=: 00:11:37.394 10:25:31 -- accel/accel.sh@20 -- # read -r var val 00:11:37.394 10:25:31 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:37.394 10:25:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.394 10:25:31 -- accel/accel.sh@20 -- # IFS=: 00:11:37.394 10:25:31 -- accel/accel.sh@20 -- # read -r var val 00:11:37.394 10:25:31 -- accel/accel.sh@21 -- # val= 00:11:37.394 10:25:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.394 10:25:31 -- accel/accel.sh@20 -- # IFS=: 00:11:37.394 10:25:31 -- accel/accel.sh@20 -- # read -r var val 00:11:37.394 10:25:31 -- accel/accel.sh@21 -- # val=software 00:11:37.394 10:25:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.394 10:25:31 -- accel/accel.sh@23 -- # accel_module=software 00:11:37.394 10:25:31 -- accel/accel.sh@20 -- # IFS=: 00:11:37.394 10:25:31 -- accel/accel.sh@20 -- # read -r var val 00:11:37.394 10:25:31 -- accel/accel.sh@21 -- # val=32 00:11:37.394 10:25:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.394 10:25:31 -- accel/accel.sh@20 -- # IFS=: 00:11:37.394 10:25:31 -- accel/accel.sh@20 -- # read -r var val 00:11:37.394 10:25:31 -- accel/accel.sh@21 -- # val=32 00:11:37.394 10:25:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.394 10:25:31 -- accel/accel.sh@20 -- # IFS=: 00:11:37.394 10:25:31 -- accel/accel.sh@20 -- # read -r var val 00:11:37.394 10:25:31 -- accel/accel.sh@21 
-- # val=1 00:11:37.394 10:25:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.394 10:25:31 -- accel/accel.sh@20 -- # IFS=: 00:11:37.394 10:25:31 -- accel/accel.sh@20 -- # read -r var val 00:11:37.394 10:25:31 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:37.394 10:25:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.394 10:25:31 -- accel/accel.sh@20 -- # IFS=: 00:11:37.394 10:25:31 -- accel/accel.sh@20 -- # read -r var val 00:11:37.394 10:25:31 -- accel/accel.sh@21 -- # val=No 00:11:37.394 10:25:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.394 10:25:31 -- accel/accel.sh@20 -- # IFS=: 00:11:37.394 10:25:31 -- accel/accel.sh@20 -- # read -r var val 00:11:37.394 10:25:31 -- accel/accel.sh@21 -- # val= 00:11:37.394 10:25:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.394 10:25:31 -- accel/accel.sh@20 -- # IFS=: 00:11:37.394 10:25:31 -- accel/accel.sh@20 -- # read -r var val 00:11:37.394 10:25:31 -- accel/accel.sh@21 -- # val= 00:11:37.394 10:25:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.394 10:25:31 -- accel/accel.sh@20 -- # IFS=: 00:11:37.394 10:25:31 -- accel/accel.sh@20 -- # read -r var val 00:11:39.295 10:25:33 -- accel/accel.sh@21 -- # val= 00:11:39.295 10:25:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:39.295 10:25:33 -- accel/accel.sh@20 -- # IFS=: 00:11:39.295 10:25:33 -- accel/accel.sh@20 -- # read -r var val 00:11:39.295 10:25:33 -- accel/accel.sh@21 -- # val= 00:11:39.295 10:25:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:39.295 10:25:33 -- accel/accel.sh@20 -- # IFS=: 00:11:39.295 10:25:33 -- accel/accel.sh@20 -- # read -r var val 00:11:39.295 10:25:33 -- accel/accel.sh@21 -- # val= 00:11:39.295 10:25:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:39.295 10:25:33 -- accel/accel.sh@20 -- # IFS=: 00:11:39.295 10:25:33 -- accel/accel.sh@20 -- # read -r var val 00:11:39.295 10:25:33 -- accel/accel.sh@21 -- # val= 00:11:39.295 10:25:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:39.295 10:25:33 -- accel/accel.sh@20 -- # IFS=: 00:11:39.295 10:25:33 -- accel/accel.sh@20 -- # read -r var val 00:11:39.295 10:25:33 -- accel/accel.sh@21 -- # val= 00:11:39.295 10:25:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:39.295 10:25:33 -- accel/accel.sh@20 -- # IFS=: 00:11:39.295 10:25:33 -- accel/accel.sh@20 -- # read -r var val 00:11:39.295 10:25:33 -- accel/accel.sh@21 -- # val= 00:11:39.295 10:25:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:39.295 10:25:33 -- accel/accel.sh@20 -- # IFS=: 00:11:39.295 10:25:33 -- accel/accel.sh@20 -- # read -r var val 00:11:39.295 ************************************ 00:11:39.295 END TEST accel_dif_generate_copy 00:11:39.295 ************************************ 00:11:39.295 10:25:33 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:39.295 10:25:33 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:11:39.295 10:25:33 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:39.295 00:11:39.295 real 0m4.659s 00:11:39.295 user 0m4.155s 00:11:39.295 sys 0m0.370s 00:11:39.295 10:25:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:39.295 10:25:33 -- common/autotest_common.sh@10 -- # set +x 00:11:39.295 10:25:33 -- accel/accel.sh@107 -- # [[ y == y ]] 00:11:39.295 10:25:33 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:39.295 10:25:33 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:11:39.295 10:25:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:39.295 10:25:33 -- 
common/autotest_common.sh@10 -- # set +x 00:11:39.295 ************************************ 00:11:39.295 START TEST accel_comp 00:11:39.295 ************************************ 00:11:39.295 10:25:33 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:39.295 10:25:33 -- accel/accel.sh@16 -- # local accel_opc 00:11:39.295 10:25:33 -- accel/accel.sh@17 -- # local accel_module 00:11:39.295 10:25:33 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:39.295 10:25:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:39.295 10:25:33 -- accel/accel.sh@12 -- # build_accel_config 00:11:39.295 10:25:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:39.295 10:25:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:39.295 10:25:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:39.295 10:25:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:39.295 10:25:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:39.295 10:25:33 -- accel/accel.sh@41 -- # local IFS=, 00:11:39.295 10:25:33 -- accel/accel.sh@42 -- # jq -r . 00:11:39.295 [2024-07-12 10:25:33.134192] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:39.295 [2024-07-12 10:25:33.134352] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110050 ] 00:11:39.552 [2024-07-12 10:25:33.286701] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:39.810 [2024-07-12 10:25:33.489445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.710 10:25:35 -- accel/accel.sh@18 -- # out='Preparing input file... 00:11:41.710 00:11:41.710 SPDK Configuration: 00:11:41.710 Core mask: 0x1 00:11:41.710 00:11:41.710 Accel Perf Configuration: 00:11:41.710 Workload Type: compress 00:11:41.710 Transfer size: 4096 bytes 00:11:41.710 Vector count 1 00:11:41.710 Module: software 00:11:41.710 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:41.710 Queue depth: 32 00:11:41.710 Allocate depth: 32 00:11:41.710 # threads/core: 1 00:11:41.710 Run time: 1 seconds 00:11:41.710 Verify: No 00:11:41.710 00:11:41.710 Running for 1 seconds... 
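Each END TEST block also carries the shell's time output (the real/user/sys lines above) for the whole test, which is useful for spotting regressions across nightly runs. Assuming this console stream were saved to a file, autotest.log being a hypothetical name, the per-test timings could be pulled out with:

    # autotest.log is a hypothetical filename for this saved console stream
    grep -E 'real[[:space:]]+[0-9]+m[0-9.]+s' autotest.log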
00:11:41.710 00:11:41.710 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:41.710 ------------------------------------------------------------------------------------ 00:11:41.710 0,0 54400/s 226 MiB/s 0 0 00:11:41.710 ==================================================================================== 00:11:41.710 Total 54400/s 212 MiB/s 0 0' 00:11:41.710 10:25:35 -- accel/accel.sh@20 -- # IFS=: 00:11:41.710 10:25:35 -- accel/accel.sh@20 -- # read -r var val 00:11:41.710 10:25:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:41.710 10:25:35 -- accel/accel.sh@12 -- # build_accel_config 00:11:41.710 10:25:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:41.710 10:25:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:41.710 10:25:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:41.710 10:25:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:41.710 10:25:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:41.710 10:25:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:41.710 10:25:35 -- accel/accel.sh@41 -- # local IFS=, 00:11:41.710 10:25:35 -- accel/accel.sh@42 -- # jq -r . 00:11:41.710 [2024-07-12 10:25:35.485204] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:41.710 [2024-07-12 10:25:35.485434] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110086 ] 00:11:41.968 [2024-07-12 10:25:35.653607] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:41.968 [2024-07-12 10:25:35.877845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.227 10:25:36 -- accel/accel.sh@21 -- # val= 00:11:42.227 10:25:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.227 10:25:36 -- accel/accel.sh@20 -- # IFS=: 00:11:42.227 10:25:36 -- accel/accel.sh@20 -- # read -r var val 00:11:42.227 10:25:36 -- accel/accel.sh@21 -- # val= 00:11:42.227 10:25:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.227 10:25:36 -- accel/accel.sh@20 -- # IFS=: 00:11:42.227 10:25:36 -- accel/accel.sh@20 -- # read -r var val 00:11:42.227 10:25:36 -- accel/accel.sh@21 -- # val= 00:11:42.227 10:25:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.227 10:25:36 -- accel/accel.sh@20 -- # IFS=: 00:11:42.227 10:25:36 -- accel/accel.sh@20 -- # read -r var val 00:11:42.227 10:25:36 -- accel/accel.sh@21 -- # val=0x1 00:11:42.227 10:25:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.228 10:25:36 -- accel/accel.sh@20 -- # IFS=: 00:11:42.228 10:25:36 -- accel/accel.sh@20 -- # read -r var val 00:11:42.228 10:25:36 -- accel/accel.sh@21 -- # val= 00:11:42.228 10:25:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.228 10:25:36 -- accel/accel.sh@20 -- # IFS=: 00:11:42.228 10:25:36 -- accel/accel.sh@20 -- # read -r var val 00:11:42.228 10:25:36 -- accel/accel.sh@21 -- # val= 00:11:42.228 10:25:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.228 10:25:36 -- accel/accel.sh@20 -- # IFS=: 00:11:42.228 10:25:36 -- accel/accel.sh@20 -- # read -r var val 00:11:42.228 10:25:36 -- accel/accel.sh@21 -- # val=compress 00:11:42.228 10:25:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.228 10:25:36 -- accel/accel.sh@24 -- # accel_opc=compress 00:11:42.228 10:25:36 -- accel/accel.sh@20 -- # IFS=: 
00:11:42.228 10:25:36 -- accel/accel.sh@20 -- # read -r var val 00:11:42.228 10:25:36 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:42.228 10:25:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.228 10:25:36 -- accel/accel.sh@20 -- # IFS=: 00:11:42.228 10:25:36 -- accel/accel.sh@20 -- # read -r var val 00:11:42.228 10:25:36 -- accel/accel.sh@21 -- # val= 00:11:42.228 10:25:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.228 10:25:36 -- accel/accel.sh@20 -- # IFS=: 00:11:42.228 10:25:36 -- accel/accel.sh@20 -- # read -r var val 00:11:42.228 10:25:36 -- accel/accel.sh@21 -- # val=software 00:11:42.228 10:25:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.228 10:25:36 -- accel/accel.sh@23 -- # accel_module=software 00:11:42.228 10:25:36 -- accel/accel.sh@20 -- # IFS=: 00:11:42.228 10:25:36 -- accel/accel.sh@20 -- # read -r var val 00:11:42.228 10:25:36 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:42.228 10:25:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.228 10:25:36 -- accel/accel.sh@20 -- # IFS=: 00:11:42.228 10:25:36 -- accel/accel.sh@20 -- # read -r var val 00:11:42.228 10:25:36 -- accel/accel.sh@21 -- # val=32 00:11:42.228 10:25:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.228 10:25:36 -- accel/accel.sh@20 -- # IFS=: 00:11:42.228 10:25:36 -- accel/accel.sh@20 -- # read -r var val 00:11:42.228 10:25:36 -- accel/accel.sh@21 -- # val=32 00:11:42.228 10:25:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.228 10:25:36 -- accel/accel.sh@20 -- # IFS=: 00:11:42.228 10:25:36 -- accel/accel.sh@20 -- # read -r var val 00:11:42.228 10:25:36 -- accel/accel.sh@21 -- # val=1 00:11:42.228 10:25:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.228 10:25:36 -- accel/accel.sh@20 -- # IFS=: 00:11:42.228 10:25:36 -- accel/accel.sh@20 -- # read -r var val 00:11:42.228 10:25:36 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:42.228 10:25:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.228 10:25:36 -- accel/accel.sh@20 -- # IFS=: 00:11:42.228 10:25:36 -- accel/accel.sh@20 -- # read -r var val 00:11:42.228 10:25:36 -- accel/accel.sh@21 -- # val=No 00:11:42.228 10:25:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.228 10:25:36 -- accel/accel.sh@20 -- # IFS=: 00:11:42.228 10:25:36 -- accel/accel.sh@20 -- # read -r var val 00:11:42.228 10:25:36 -- accel/accel.sh@21 -- # val= 00:11:42.228 10:25:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.228 10:25:36 -- accel/accel.sh@20 -- # IFS=: 00:11:42.228 10:25:36 -- accel/accel.sh@20 -- # read -r var val 00:11:42.228 10:25:36 -- accel/accel.sh@21 -- # val= 00:11:42.228 10:25:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.228 10:25:36 -- accel/accel.sh@20 -- # IFS=: 00:11:42.228 10:25:36 -- accel/accel.sh@20 -- # read -r var val 00:11:44.132 10:25:37 -- accel/accel.sh@21 -- # val= 00:11:44.132 10:25:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:44.132 10:25:37 -- accel/accel.sh@20 -- # IFS=: 00:11:44.132 10:25:37 -- accel/accel.sh@20 -- # read -r var val 00:11:44.132 10:25:37 -- accel/accel.sh@21 -- # val= 00:11:44.132 10:25:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:44.132 10:25:37 -- accel/accel.sh@20 -- # IFS=: 00:11:44.132 10:25:37 -- accel/accel.sh@20 -- # read -r var val 00:11:44.132 10:25:37 -- accel/accel.sh@21 -- # val= 00:11:44.132 10:25:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:44.132 10:25:37 -- accel/accel.sh@20 -- # IFS=: 00:11:44.132 10:25:37 -- accel/accel.sh@20 -- # read -r var val 00:11:44.132 10:25:37 -- accel/accel.sh@21 -- # val= 
00:11:44.132 10:25:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:44.132 10:25:37 -- accel/accel.sh@20 -- # IFS=: 00:11:44.132 10:25:37 -- accel/accel.sh@20 -- # read -r var val 00:11:44.132 10:25:37 -- accel/accel.sh@21 -- # val= 00:11:44.132 10:25:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:44.132 10:25:37 -- accel/accel.sh@20 -- # IFS=: 00:11:44.132 10:25:37 -- accel/accel.sh@20 -- # read -r var val 00:11:44.132 10:25:37 -- accel/accel.sh@21 -- # val= 00:11:44.132 10:25:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:44.132 10:25:37 -- accel/accel.sh@20 -- # IFS=: 00:11:44.132 10:25:37 -- accel/accel.sh@20 -- # read -r var val 00:11:44.132 10:25:37 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:44.132 10:25:37 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:11:44.132 10:25:37 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:44.132 00:11:44.132 real 0m4.837s 00:11:44.132 user 0m4.366s 00:11:44.132 sys 0m0.329s 00:11:44.132 10:25:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:44.132 ************************************ 00:11:44.132 END TEST accel_comp 00:11:44.132 ************************************ 00:11:44.132 10:25:37 -- common/autotest_common.sh@10 -- # set +x 00:11:44.132 10:25:37 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:44.132 10:25:37 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:11:44.132 10:25:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:44.132 10:25:37 -- common/autotest_common.sh@10 -- # set +x 00:11:44.132 ************************************ 00:11:44.132 START TEST accel_decomp 00:11:44.132 ************************************ 00:11:44.132 10:25:37 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:44.132 10:25:37 -- accel/accel.sh@16 -- # local accel_opc 00:11:44.132 10:25:37 -- accel/accel.sh@17 -- # local accel_module 00:11:44.132 10:25:37 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:44.132 10:25:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:44.132 10:25:37 -- accel/accel.sh@12 -- # build_accel_config 00:11:44.132 10:25:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:44.132 10:25:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:44.132 10:25:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:44.132 10:25:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:44.132 10:25:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:44.132 10:25:37 -- accel/accel.sh@41 -- # local IFS=, 00:11:44.132 10:25:37 -- accel/accel.sh@42 -- # jq -r . 00:11:44.132 [2024-07-12 10:25:38.029918] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:44.132 [2024-07-12 10:25:38.030129] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110154 ] 00:11:44.392 [2024-07-12 10:25:38.186007] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:44.651 [2024-07-12 10:25:38.389373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.552 10:25:40 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:11:46.552 00:11:46.552 SPDK Configuration: 00:11:46.552 Core mask: 0x1 00:11:46.552 00:11:46.552 Accel Perf Configuration: 00:11:46.552 Workload Type: decompress 00:11:46.552 Transfer size: 4096 bytes 00:11:46.552 Vector count 1 00:11:46.552 Module: software 00:11:46.552 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:46.552 Queue depth: 32 00:11:46.552 Allocate depth: 32 00:11:46.552 # threads/core: 1 00:11:46.552 Run time: 1 seconds 00:11:46.552 Verify: Yes 00:11:46.552 00:11:46.552 Running for 1 seconds... 00:11:46.552 00:11:46.552 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:46.552 ------------------------------------------------------------------------------------ 00:11:46.552 0,0 70464/s 129 MiB/s 0 0 00:11:46.552 ==================================================================================== 00:11:46.552 Total 70464/s 275 MiB/s 0 0' 00:11:46.552 10:25:40 -- accel/accel.sh@20 -- # IFS=: 00:11:46.552 10:25:40 -- accel/accel.sh@20 -- # read -r var val 00:11:46.552 10:25:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:46.552 10:25:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:46.552 10:25:40 -- accel/accel.sh@12 -- # build_accel_config 00:11:46.552 10:25:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:46.552 10:25:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:46.552 10:25:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:46.552 10:25:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:46.552 10:25:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:46.552 10:25:40 -- accel/accel.sh@41 -- # local IFS=, 00:11:46.552 10:25:40 -- accel/accel.sh@42 -- # jq -r . 00:11:46.810 [2024-07-12 10:25:40.487447] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
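The compress and decompress workloads differ visibly from the DIF ones in two ways: they read a real input file passed with -l (surfacing as the File Name row in the configuration dumps), and when -y is given, as in the decompress invocations, the run reports Verify: Yes. Reproducing this decompress run under the same assumptions as the dif_verify sketch earlier (an empty JSON config standing in for the harness-built one):

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -c /dev/fd/62 -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 62< <(echo '{}')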
00:11:46.810 [2024-07-12 10:25:40.487628] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110193 ] 00:11:46.810 [2024-07-12 10:25:40.650438] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:47.069 [2024-07-12 10:25:40.880194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:47.329 10:25:41 -- accel/accel.sh@21 -- # val= 00:11:47.329 10:25:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:47.329 10:25:41 -- accel/accel.sh@20 -- # IFS=: 00:11:47.329 10:25:41 -- accel/accel.sh@20 -- # read -r var val 00:11:47.329 10:25:41 -- accel/accel.sh@21 -- # val= 00:11:47.329 10:25:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:47.329 10:25:41 -- accel/accel.sh@20 -- # IFS=: 00:11:47.329 10:25:41 -- accel/accel.sh@20 -- # read -r var val 00:11:47.329 10:25:41 -- accel/accel.sh@21 -- # val= 00:11:47.329 10:25:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:47.329 10:25:41 -- accel/accel.sh@20 -- # IFS=: 00:11:47.329 10:25:41 -- accel/accel.sh@20 -- # read -r var val 00:11:47.329 10:25:41 -- accel/accel.sh@21 -- # val=0x1 00:11:47.329 10:25:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:47.329 10:25:41 -- accel/accel.sh@20 -- # IFS=: 00:11:47.329 10:25:41 -- accel/accel.sh@20 -- # read -r var val 00:11:47.329 10:25:41 -- accel/accel.sh@21 -- # val= 00:11:47.329 10:25:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:47.329 10:25:41 -- accel/accel.sh@20 -- # IFS=: 00:11:47.329 10:25:41 -- accel/accel.sh@20 -- # read -r var val 00:11:47.329 10:25:41 -- accel/accel.sh@21 -- # val= 00:11:47.329 10:25:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:47.329 10:25:41 -- accel/accel.sh@20 -- # IFS=: 00:11:47.329 10:25:41 -- accel/accel.sh@20 -- # read -r var val 00:11:47.329 10:25:41 -- accel/accel.sh@21 -- # val=decompress 00:11:47.329 10:25:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:47.329 10:25:41 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:47.329 10:25:41 -- accel/accel.sh@20 -- # IFS=: 00:11:47.329 10:25:41 -- accel/accel.sh@20 -- # read -r var val 00:11:47.329 10:25:41 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:47.329 10:25:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:47.329 10:25:41 -- accel/accel.sh@20 -- # IFS=: 00:11:47.329 10:25:41 -- accel/accel.sh@20 -- # read -r var val 00:11:47.329 10:25:41 -- accel/accel.sh@21 -- # val= 00:11:47.329 10:25:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:47.329 10:25:41 -- accel/accel.sh@20 -- # IFS=: 00:11:47.329 10:25:41 -- accel/accel.sh@20 -- # read -r var val 00:11:47.329 10:25:41 -- accel/accel.sh@21 -- # val=software 00:11:47.329 10:25:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:47.329 10:25:41 -- accel/accel.sh@23 -- # accel_module=software 00:11:47.329 10:25:41 -- accel/accel.sh@20 -- # IFS=: 00:11:47.329 10:25:41 -- accel/accel.sh@20 -- # read -r var val 00:11:47.329 10:25:41 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:47.329 10:25:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:47.329 10:25:41 -- accel/accel.sh@20 -- # IFS=: 00:11:47.329 10:25:41 -- accel/accel.sh@20 -- # read -r var val 00:11:47.329 10:25:41 -- accel/accel.sh@21 -- # val=32 00:11:47.329 10:25:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:47.329 10:25:41 -- accel/accel.sh@20 -- # IFS=: 00:11:47.329 10:25:41 -- accel/accel.sh@20 -- # read -r var val 00:11:47.329 10:25:41 -- 
accel/accel.sh@21 -- # val=32 00:11:47.329 10:25:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:47.329 10:25:41 -- accel/accel.sh@20 -- # IFS=: 00:11:47.329 10:25:41 -- accel/accel.sh@20 -- # read -r var val 00:11:47.329 10:25:41 -- accel/accel.sh@21 -- # val=1 00:11:47.329 10:25:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:47.329 10:25:41 -- accel/accel.sh@20 -- # IFS=: 00:11:47.329 10:25:41 -- accel/accel.sh@20 -- # read -r var val 00:11:47.329 10:25:41 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:47.329 10:25:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:47.329 10:25:41 -- accel/accel.sh@20 -- # IFS=: 00:11:47.329 10:25:41 -- accel/accel.sh@20 -- # read -r var val 00:11:47.329 10:25:41 -- accel/accel.sh@21 -- # val=Yes 00:11:47.329 10:25:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:47.329 10:25:41 -- accel/accel.sh@20 -- # IFS=: 00:11:47.329 10:25:41 -- accel/accel.sh@20 -- # read -r var val 00:11:47.329 10:25:41 -- accel/accel.sh@21 -- # val= 00:11:47.329 10:25:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:47.329 10:25:41 -- accel/accel.sh@20 -- # IFS=: 00:11:47.329 10:25:41 -- accel/accel.sh@20 -- # read -r var val 00:11:47.329 10:25:41 -- accel/accel.sh@21 -- # val= 00:11:47.329 10:25:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:47.329 10:25:41 -- accel/accel.sh@20 -- # IFS=: 00:11:47.329 10:25:41 -- accel/accel.sh@20 -- # read -r var val 00:11:49.236 10:25:42 -- accel/accel.sh@21 -- # val= 00:11:49.237 10:25:42 -- accel/accel.sh@22 -- # case "$var" in 00:11:49.237 10:25:42 -- accel/accel.sh@20 -- # IFS=: 00:11:49.237 10:25:42 -- accel/accel.sh@20 -- # read -r var val 00:11:49.237 10:25:42 -- accel/accel.sh@21 -- # val= 00:11:49.237 10:25:42 -- accel/accel.sh@22 -- # case "$var" in 00:11:49.237 10:25:42 -- accel/accel.sh@20 -- # IFS=: 00:11:49.237 10:25:42 -- accel/accel.sh@20 -- # read -r var val 00:11:49.237 10:25:42 -- accel/accel.sh@21 -- # val= 00:11:49.237 10:25:42 -- accel/accel.sh@22 -- # case "$var" in 00:11:49.237 10:25:42 -- accel/accel.sh@20 -- # IFS=: 00:11:49.237 10:25:42 -- accel/accel.sh@20 -- # read -r var val 00:11:49.237 10:25:42 -- accel/accel.sh@21 -- # val= 00:11:49.237 10:25:42 -- accel/accel.sh@22 -- # case "$var" in 00:11:49.237 10:25:42 -- accel/accel.sh@20 -- # IFS=: 00:11:49.237 10:25:42 -- accel/accel.sh@20 -- # read -r var val 00:11:49.237 10:25:42 -- accel/accel.sh@21 -- # val= 00:11:49.237 10:25:42 -- accel/accel.sh@22 -- # case "$var" in 00:11:49.237 10:25:42 -- accel/accel.sh@20 -- # IFS=: 00:11:49.237 10:25:42 -- accel/accel.sh@20 -- # read -r var val 00:11:49.237 10:25:42 -- accel/accel.sh@21 -- # val= 00:11:49.237 10:25:42 -- accel/accel.sh@22 -- # case "$var" in 00:11:49.237 10:25:42 -- accel/accel.sh@20 -- # IFS=: 00:11:49.237 10:25:42 -- accel/accel.sh@20 -- # read -r var val 00:11:49.237 10:25:42 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:49.237 10:25:42 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:49.237 10:25:42 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:49.237 00:11:49.237 real 0m4.968s 00:11:49.237 user 0m4.402s 00:11:49.237 sys 0m0.427s 00:11:49.237 10:25:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:49.237 10:25:42 -- common/autotest_common.sh@10 -- # set +x 00:11:49.237 ************************************ 00:11:49.237 END TEST accel_decomp 00:11:49.237 ************************************ 00:11:49.237 10:25:42 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
00:11:49.237 10:25:42 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:11:49.237 10:25:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:49.237 10:25:42 -- common/autotest_common.sh@10 -- # set +x 00:11:49.237 ************************************ 00:11:49.237 START TEST accel_decmop_full 00:11:49.237 ************************************ 00:11:49.237 10:25:43 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:49.237 10:25:43 -- accel/accel.sh@16 -- # local accel_opc 00:11:49.237 10:25:43 -- accel/accel.sh@17 -- # local accel_module 00:11:49.237 10:25:43 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:49.237 10:25:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:49.237 10:25:43 -- accel/accel.sh@12 -- # build_accel_config 00:11:49.237 10:25:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:49.237 10:25:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:49.237 10:25:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:49.237 10:25:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:49.237 10:25:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:49.237 10:25:43 -- accel/accel.sh@41 -- # local IFS=, 00:11:49.237 10:25:43 -- accel/accel.sh@42 -- # jq -r . 00:11:49.237 [2024-07-12 10:25:43.049820] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:49.237 [2024-07-12 10:25:43.049978] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110247 ] 00:11:49.502 [2024-07-12 10:25:43.200365] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:49.502 [2024-07-12 10:25:43.414214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:52.038 10:25:45 -- accel/accel.sh@18 -- # out='Preparing input file... 00:11:52.038 00:11:52.038 SPDK Configuration: 00:11:52.038 Core mask: 0x1 00:11:52.038 00:11:52.038 Accel Perf Configuration: 00:11:52.038 Workload Type: decompress 00:11:52.038 Transfer size: 111250 bytes 00:11:52.038 Vector count 1 00:11:52.038 Module: software 00:11:52.038 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:52.038 Queue depth: 32 00:11:52.038 Allocate depth: 32 00:11:52.038 # threads/core: 1 00:11:52.038 Run time: 1 seconds 00:11:52.038 Verify: Yes 00:11:52.038 00:11:52.038 Running for 1 seconds... 
00:11:52.038 00:11:52.038 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:52.038 ------------------------------------------------------------------------------------ 00:11:52.038 0,0 5184/s 214 MiB/s 0 0 00:11:52.038 ==================================================================================== 00:11:52.038 Total 5184/s 550 MiB/s 0 0' 00:11:52.038 10:25:45 -- accel/accel.sh@20 -- # IFS=: 00:11:52.038 10:25:45 -- accel/accel.sh@20 -- # read -r var val 00:11:52.038 10:25:45 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:52.038 10:25:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:52.038 10:25:45 -- accel/accel.sh@12 -- # build_accel_config 00:11:52.038 10:25:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:52.038 10:25:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:52.038 10:25:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:52.038 10:25:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:52.038 10:25:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:52.038 10:25:45 -- accel/accel.sh@41 -- # local IFS=, 00:11:52.038 10:25:45 -- accel/accel.sh@42 -- # jq -r . 00:11:52.038 [2024-07-12 10:25:45.525425] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:52.039 [2024-07-12 10:25:45.525657] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110281 ] 00:11:52.039 [2024-07-12 10:25:45.693637] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:52.039 [2024-07-12 10:25:45.926562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:52.297 10:25:46 -- accel/accel.sh@21 -- # val= 00:11:52.297 10:25:46 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.297 10:25:46 -- accel/accel.sh@20 -- # IFS=: 00:11:52.297 10:25:46 -- accel/accel.sh@20 -- # read -r var val 00:11:52.297 10:25:46 -- accel/accel.sh@21 -- # val= 00:11:52.297 10:25:46 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.297 10:25:46 -- accel/accel.sh@20 -- # IFS=: 00:11:52.298 10:25:46 -- accel/accel.sh@20 -- # read -r var val 00:11:52.298 10:25:46 -- accel/accel.sh@21 -- # val= 00:11:52.298 10:25:46 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.298 10:25:46 -- accel/accel.sh@20 -- # IFS=: 00:11:52.298 10:25:46 -- accel/accel.sh@20 -- # read -r var val 00:11:52.298 10:25:46 -- accel/accel.sh@21 -- # val=0x1 00:11:52.298 10:25:46 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.298 10:25:46 -- accel/accel.sh@20 -- # IFS=: 00:11:52.298 10:25:46 -- accel/accel.sh@20 -- # read -r var val 00:11:52.298 10:25:46 -- accel/accel.sh@21 -- # val= 00:11:52.298 10:25:46 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.298 10:25:46 -- accel/accel.sh@20 -- # IFS=: 00:11:52.298 10:25:46 -- accel/accel.sh@20 -- # read -r var val 00:11:52.298 10:25:46 -- accel/accel.sh@21 -- # val= 00:11:52.298 10:25:46 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.298 10:25:46 -- accel/accel.sh@20 -- # IFS=: 00:11:52.298 10:25:46 -- accel/accel.sh@20 -- # read -r var val 00:11:52.298 10:25:46 -- accel/accel.sh@21 -- # val=decompress 00:11:52.298 10:25:46 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.298 10:25:46 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:52.298 10:25:46 -- 
accel/accel.sh@20 -- # IFS=: 00:11:52.298 10:25:46 -- accel/accel.sh@20 -- # read -r var val 00:11:52.298 10:25:46 -- accel/accel.sh@21 -- # val='111250 bytes' 00:11:52.298 10:25:46 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.298 10:25:46 -- accel/accel.sh@20 -- # IFS=: 00:11:52.298 10:25:46 -- accel/accel.sh@20 -- # read -r var val 00:11:52.298 10:25:46 -- accel/accel.sh@21 -- # val= 00:11:52.298 10:25:46 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.298 10:25:46 -- accel/accel.sh@20 -- # IFS=: 00:11:52.298 10:25:46 -- accel/accel.sh@20 -- # read -r var val 00:11:52.298 10:25:46 -- accel/accel.sh@21 -- # val=software 00:11:52.298 10:25:46 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.298 10:25:46 -- accel/accel.sh@23 -- # accel_module=software 00:11:52.298 10:25:46 -- accel/accel.sh@20 -- # IFS=: 00:11:52.298 10:25:46 -- accel/accel.sh@20 -- # read -r var val 00:11:52.298 10:25:46 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:52.298 10:25:46 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.298 10:25:46 -- accel/accel.sh@20 -- # IFS=: 00:11:52.298 10:25:46 -- accel/accel.sh@20 -- # read -r var val 00:11:52.298 10:25:46 -- accel/accel.sh@21 -- # val=32 00:11:52.298 10:25:46 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.298 10:25:46 -- accel/accel.sh@20 -- # IFS=: 00:11:52.298 10:25:46 -- accel/accel.sh@20 -- # read -r var val 00:11:52.298 10:25:46 -- accel/accel.sh@21 -- # val=32 00:11:52.298 10:25:46 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.298 10:25:46 -- accel/accel.sh@20 -- # IFS=: 00:11:52.298 10:25:46 -- accel/accel.sh@20 -- # read -r var val 00:11:52.298 10:25:46 -- accel/accel.sh@21 -- # val=1 00:11:52.298 10:25:46 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.298 10:25:46 -- accel/accel.sh@20 -- # IFS=: 00:11:52.298 10:25:46 -- accel/accel.sh@20 -- # read -r var val 00:11:52.298 10:25:46 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:52.298 10:25:46 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.298 10:25:46 -- accel/accel.sh@20 -- # IFS=: 00:11:52.298 10:25:46 -- accel/accel.sh@20 -- # read -r var val 00:11:52.298 10:25:46 -- accel/accel.sh@21 -- # val=Yes 00:11:52.298 10:25:46 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.298 10:25:46 -- accel/accel.sh@20 -- # IFS=: 00:11:52.298 10:25:46 -- accel/accel.sh@20 -- # read -r var val 00:11:52.298 10:25:46 -- accel/accel.sh@21 -- # val= 00:11:52.298 10:25:46 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.298 10:25:46 -- accel/accel.sh@20 -- # IFS=: 00:11:52.298 10:25:46 -- accel/accel.sh@20 -- # read -r var val 00:11:52.298 10:25:46 -- accel/accel.sh@21 -- # val= 00:11:52.298 10:25:46 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.298 10:25:46 -- accel/accel.sh@20 -- # IFS=: 00:11:52.298 10:25:46 -- accel/accel.sh@20 -- # read -r var val 00:11:54.201 10:25:47 -- accel/accel.sh@21 -- # val= 00:11:54.201 10:25:47 -- accel/accel.sh@22 -- # case "$var" in 00:11:54.201 10:25:47 -- accel/accel.sh@20 -- # IFS=: 00:11:54.201 10:25:47 -- accel/accel.sh@20 -- # read -r var val 00:11:54.201 10:25:47 -- accel/accel.sh@21 -- # val= 00:11:54.201 10:25:47 -- accel/accel.sh@22 -- # case "$var" in 00:11:54.201 10:25:47 -- accel/accel.sh@20 -- # IFS=: 00:11:54.201 10:25:47 -- accel/accel.sh@20 -- # read -r var val 00:11:54.201 10:25:47 -- accel/accel.sh@21 -- # val= 00:11:54.201 10:25:47 -- accel/accel.sh@22 -- # case "$var" in 00:11:54.201 10:25:47 -- accel/accel.sh@20 -- # IFS=: 00:11:54.201 10:25:47 -- accel/accel.sh@20 -- # read -r var val 00:11:54.201 10:25:47 -- 
accel/accel.sh@21 -- # val= 00:11:54.201 10:25:47 -- accel/accel.sh@22 -- # case "$var" in 00:11:54.201 10:25:47 -- accel/accel.sh@20 -- # IFS=: 00:11:54.201 10:25:47 -- accel/accel.sh@20 -- # read -r var val 00:11:54.201 10:25:47 -- accel/accel.sh@21 -- # val= 00:11:54.201 10:25:47 -- accel/accel.sh@22 -- # case "$var" in 00:11:54.201 10:25:47 -- accel/accel.sh@20 -- # IFS=: 00:11:54.201 10:25:47 -- accel/accel.sh@20 -- # read -r var val 00:11:54.201 10:25:47 -- accel/accel.sh@21 -- # val= 00:11:54.201 10:25:47 -- accel/accel.sh@22 -- # case "$var" in 00:11:54.201 10:25:47 -- accel/accel.sh@20 -- # IFS=: 00:11:54.201 10:25:47 -- accel/accel.sh@20 -- # read -r var val 00:11:54.201 10:25:47 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:54.201 10:25:47 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:54.201 10:25:47 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:54.201 00:11:54.201 real 0m4.965s 00:11:54.201 user 0m4.439s 00:11:54.201 sys 0m0.393s 00:11:54.201 10:25:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:54.201 ************************************ 00:11:54.201 END TEST accel_decmop_full 00:11:54.201 ************************************ 00:11:54.201 10:25:47 -- common/autotest_common.sh@10 -- # set +x 00:11:54.201 10:25:48 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:54.201 10:25:48 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:11:54.201 10:25:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:54.201 10:25:48 -- common/autotest_common.sh@10 -- # set +x 00:11:54.201 ************************************ 00:11:54.201 START TEST accel_decomp_mcore 00:11:54.201 ************************************ 00:11:54.201 10:25:48 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:54.201 10:25:48 -- accel/accel.sh@16 -- # local accel_opc 00:11:54.201 10:25:48 -- accel/accel.sh@17 -- # local accel_module 00:11:54.201 10:25:48 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:54.201 10:25:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:54.201 10:25:48 -- accel/accel.sh@12 -- # build_accel_config 00:11:54.201 10:25:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:54.201 10:25:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:54.201 10:25:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:54.201 10:25:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:54.201 10:25:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:54.201 10:25:48 -- accel/accel.sh@41 -- # local IFS=, 00:11:54.201 10:25:48 -- accel/accel.sh@42 -- # jq -r . 00:11:54.201 [2024-07-12 10:25:48.071157] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
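The accel_decomp_mcore suite starting here differs from the single-core run only in -m 0xf. That argument is a hexadecimal core mask, not a core count: bits 0 through 3 are set, so the EAL reports four available cores and one reactor comes up per set bit, as the notices below show. A small illustration of how such a mask maps to the first N cores:

```bash
# Purely illustrative: build an SPDK/accel_perf -m core mask covering the
# first N cores. Each set bit in the mask corresponds to one reactor.
N=4
printf -- '-m 0x%x\n' $(( (1 << N) - 1 ))   # prints "-m 0xf" for N=4
```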
00:11:54.201 [2024-07-12 10:25:48.071829] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110349 ] 00:11:54.462 [2024-07-12 10:25:48.243504] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:54.721 [2024-07-12 10:25:48.449921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:54.721 [2024-07-12 10:25:48.450076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:54.721 [2024-07-12 10:25:48.450189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:54.721 [2024-07-12 10:25:48.450187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:57.254 10:25:50 -- accel/accel.sh@18 -- # out='Preparing input file... 00:11:57.254 00:11:57.254 SPDK Configuration: 00:11:57.254 Core mask: 0xf 00:11:57.254 00:11:57.254 Accel Perf Configuration: 00:11:57.254 Workload Type: decompress 00:11:57.254 Transfer size: 4096 bytes 00:11:57.254 Vector count 1 00:11:57.254 Module: software 00:11:57.254 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:57.254 Queue depth: 32 00:11:57.254 Allocate depth: 32 00:11:57.254 # threads/core: 1 00:11:57.254 Run time: 1 seconds 00:11:57.254 Verify: Yes 00:11:57.254 00:11:57.254 Running for 1 seconds... 00:11:57.254 00:11:57.254 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:57.254 ------------------------------------------------------------------------------------ 00:11:57.254 0,0 47936/s 88 MiB/s 0 0 00:11:57.254 3,0 40512/s 74 MiB/s 0 0 00:11:57.254 2,0 48128/s 88 MiB/s 0 0 00:11:57.254 1,0 43456/s 80 MiB/s 0 0 00:11:57.254 ==================================================================================== 00:11:57.254 Total 180032/s 703 MiB/s 0 0' 00:11:57.254 10:25:50 -- accel/accel.sh@20 -- # IFS=: 00:11:57.254 10:25:50 -- accel/accel.sh@20 -- # read -r var val 00:11:57.254 10:25:50 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:57.254 10:25:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:57.254 10:25:50 -- accel/accel.sh@12 -- # build_accel_config 00:11:57.254 10:25:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:57.254 10:25:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:57.254 10:25:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:57.254 10:25:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:57.254 10:25:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:57.254 10:25:50 -- accel/accel.sh@41 -- # local IFS=, 00:11:57.254 10:25:50 -- accel/accel.sh@42 -- # jq -r . 00:11:57.254 [2024-07-12 10:25:50.592432] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:11:57.254 [2024-07-12 10:25:50.592642] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110391 ] 00:11:57.254 [2024-07-12 10:25:50.777467] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:57.254 [2024-07-12 10:25:51.033165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:57.254 [2024-07-12 10:25:51.033306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:57.254 [2024-07-12 10:25:51.033438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:57.254 [2024-07-12 10:25:51.033446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.513 10:25:51 -- accel/accel.sh@21 -- # val= 00:11:57.513 10:25:51 -- accel/accel.sh@22 -- # case "$var" in 00:11:57.513 10:25:51 -- accel/accel.sh@20 -- # IFS=: 00:11:57.513 10:25:51 -- accel/accel.sh@20 -- # read -r var val 00:11:57.513 10:25:51 -- accel/accel.sh@21 -- # val= 00:11:57.513 10:25:51 -- accel/accel.sh@22 -- # case "$var" in 00:11:57.513 10:25:51 -- accel/accel.sh@20 -- # IFS=: 00:11:57.513 10:25:51 -- accel/accel.sh@20 -- # read -r var val 00:11:57.513 10:25:51 -- accel/accel.sh@21 -- # val= 00:11:57.513 10:25:51 -- accel/accel.sh@22 -- # case "$var" in 00:11:57.513 10:25:51 -- accel/accel.sh@20 -- # IFS=: 00:11:57.513 10:25:51 -- accel/accel.sh@20 -- # read -r var val 00:11:57.513 10:25:51 -- accel/accel.sh@21 -- # val=0xf 00:11:57.513 10:25:51 -- accel/accel.sh@22 -- # case "$var" in 00:11:57.513 10:25:51 -- accel/accel.sh@20 -- # IFS=: 00:11:57.513 10:25:51 -- accel/accel.sh@20 -- # read -r var val 00:11:57.513 10:25:51 -- accel/accel.sh@21 -- # val= 00:11:57.513 10:25:51 -- accel/accel.sh@22 -- # case "$var" in 00:11:57.513 10:25:51 -- accel/accel.sh@20 -- # IFS=: 00:11:57.513 10:25:51 -- accel/accel.sh@20 -- # read -r var val 00:11:57.513 10:25:51 -- accel/accel.sh@21 -- # val= 00:11:57.513 10:25:51 -- accel/accel.sh@22 -- # case "$var" in 00:11:57.513 10:25:51 -- accel/accel.sh@20 -- # IFS=: 00:11:57.513 10:25:51 -- accel/accel.sh@20 -- # read -r var val 00:11:57.513 10:25:51 -- accel/accel.sh@21 -- # val=decompress 00:11:57.513 10:25:51 -- accel/accel.sh@22 -- # case "$var" in 00:11:57.513 10:25:51 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:57.513 10:25:51 -- accel/accel.sh@20 -- # IFS=: 00:11:57.513 10:25:51 -- accel/accel.sh@20 -- # read -r var val 00:11:57.513 10:25:51 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:57.513 10:25:51 -- accel/accel.sh@22 -- # case "$var" in 00:11:57.513 10:25:51 -- accel/accel.sh@20 -- # IFS=: 00:11:57.513 10:25:51 -- accel/accel.sh@20 -- # read -r var val 00:11:57.513 10:25:51 -- accel/accel.sh@21 -- # val= 00:11:57.513 10:25:51 -- accel/accel.sh@22 -- # case "$var" in 00:11:57.513 10:25:51 -- accel/accel.sh@20 -- # IFS=: 00:11:57.513 10:25:51 -- accel/accel.sh@20 -- # read -r var val 00:11:57.513 10:25:51 -- accel/accel.sh@21 -- # val=software 00:11:57.513 10:25:51 -- accel/accel.sh@22 -- # case "$var" in 00:11:57.513 10:25:51 -- accel/accel.sh@23 -- # accel_module=software 00:11:57.513 10:25:51 -- accel/accel.sh@20 -- # IFS=: 00:11:57.513 10:25:51 -- accel/accel.sh@20 -- # read -r var val 00:11:57.513 10:25:51 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:57.513 10:25:51 -- accel/accel.sh@22 -- # case "$var" in 00:11:57.513 10:25:51 -- accel/accel.sh@20 -- # IFS=: 
00:11:57.513 10:25:51 -- accel/accel.sh@20 -- # read -r var val 00:11:57.513 10:25:51 -- accel/accel.sh@21 -- # val=32 00:11:57.513 10:25:51 -- accel/accel.sh@22 -- # case "$var" in 00:11:57.513 10:25:51 -- accel/accel.sh@20 -- # IFS=: 00:11:57.513 10:25:51 -- accel/accel.sh@20 -- # read -r var val 00:11:57.513 10:25:51 -- accel/accel.sh@21 -- # val=32 00:11:57.513 10:25:51 -- accel/accel.sh@22 -- # case "$var" in 00:11:57.513 10:25:51 -- accel/accel.sh@20 -- # IFS=: 00:11:57.513 10:25:51 -- accel/accel.sh@20 -- # read -r var val 00:11:57.513 10:25:51 -- accel/accel.sh@21 -- # val=1 00:11:57.513 10:25:51 -- accel/accel.sh@22 -- # case "$var" in 00:11:57.513 10:25:51 -- accel/accel.sh@20 -- # IFS=: 00:11:57.513 10:25:51 -- accel/accel.sh@20 -- # read -r var val 00:11:57.513 10:25:51 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:57.513 10:25:51 -- accel/accel.sh@22 -- # case "$var" in 00:11:57.513 10:25:51 -- accel/accel.sh@20 -- # IFS=: 00:11:57.513 10:25:51 -- accel/accel.sh@20 -- # read -r var val 00:11:57.513 10:25:51 -- accel/accel.sh@21 -- # val=Yes 00:11:57.513 10:25:51 -- accel/accel.sh@22 -- # case "$var" in 00:11:57.513 10:25:51 -- accel/accel.sh@20 -- # IFS=: 00:11:57.513 10:25:51 -- accel/accel.sh@20 -- # read -r var val 00:11:57.513 10:25:51 -- accel/accel.sh@21 -- # val= 00:11:57.513 10:25:51 -- accel/accel.sh@22 -- # case "$var" in 00:11:57.513 10:25:51 -- accel/accel.sh@20 -- # IFS=: 00:11:57.513 10:25:51 -- accel/accel.sh@20 -- # read -r var val 00:11:57.513 10:25:51 -- accel/accel.sh@21 -- # val= 00:11:57.513 10:25:51 -- accel/accel.sh@22 -- # case "$var" in 00:11:57.513 10:25:51 -- accel/accel.sh@20 -- # IFS=: 00:11:57.513 10:25:51 -- accel/accel.sh@20 -- # read -r var val 00:11:59.415 10:25:53 -- accel/accel.sh@21 -- # val= 00:11:59.415 10:25:53 -- accel/accel.sh@22 -- # case "$var" in 00:11:59.415 10:25:53 -- accel/accel.sh@20 -- # IFS=: 00:11:59.415 10:25:53 -- accel/accel.sh@20 -- # read -r var val 00:11:59.415 10:25:53 -- accel/accel.sh@21 -- # val= 00:11:59.415 10:25:53 -- accel/accel.sh@22 -- # case "$var" in 00:11:59.415 10:25:53 -- accel/accel.sh@20 -- # IFS=: 00:11:59.415 10:25:53 -- accel/accel.sh@20 -- # read -r var val 00:11:59.415 10:25:53 -- accel/accel.sh@21 -- # val= 00:11:59.415 10:25:53 -- accel/accel.sh@22 -- # case "$var" in 00:11:59.415 10:25:53 -- accel/accel.sh@20 -- # IFS=: 00:11:59.415 10:25:53 -- accel/accel.sh@20 -- # read -r var val 00:11:59.415 10:25:53 -- accel/accel.sh@21 -- # val= 00:11:59.415 10:25:53 -- accel/accel.sh@22 -- # case "$var" in 00:11:59.415 10:25:53 -- accel/accel.sh@20 -- # IFS=: 00:11:59.415 10:25:53 -- accel/accel.sh@20 -- # read -r var val 00:11:59.415 10:25:53 -- accel/accel.sh@21 -- # val= 00:11:59.415 10:25:53 -- accel/accel.sh@22 -- # case "$var" in 00:11:59.415 10:25:53 -- accel/accel.sh@20 -- # IFS=: 00:11:59.415 10:25:53 -- accel/accel.sh@20 -- # read -r var val 00:11:59.415 10:25:53 -- accel/accel.sh@21 -- # val= 00:11:59.415 10:25:53 -- accel/accel.sh@22 -- # case "$var" in 00:11:59.415 10:25:53 -- accel/accel.sh@20 -- # IFS=: 00:11:59.415 10:25:53 -- accel/accel.sh@20 -- # read -r var val 00:11:59.415 10:25:53 -- accel/accel.sh@21 -- # val= 00:11:59.415 10:25:53 -- accel/accel.sh@22 -- # case "$var" in 00:11:59.415 10:25:53 -- accel/accel.sh@20 -- # IFS=: 00:11:59.415 10:25:53 -- accel/accel.sh@20 -- # read -r var val 00:11:59.415 10:25:53 -- accel/accel.sh@21 -- # val= 00:11:59.415 10:25:53 -- accel/accel.sh@22 -- # case "$var" in 00:11:59.415 10:25:53 -- accel/accel.sh@20 -- # IFS=: 00:11:59.415 10:25:53 -- 
accel/accel.sh@20 -- # read -r var val 00:11:59.415 10:25:53 -- accel/accel.sh@21 -- # val= 00:11:59.415 10:25:53 -- accel/accel.sh@22 -- # case "$var" in 00:11:59.415 10:25:53 -- accel/accel.sh@20 -- # IFS=: 00:11:59.415 10:25:53 -- accel/accel.sh@20 -- # read -r var val 00:11:59.415 10:25:53 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:59.415 10:25:53 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:59.415 10:25:53 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:59.415 ************************************ 00:11:59.415 END TEST accel_decomp_mcore 00:11:59.415 ************************************ 00:11:59.415 00:11:59.415 real 0m5.105s 00:11:59.415 user 0m14.660s 00:11:59.415 sys 0m0.502s 00:11:59.415 10:25:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:59.415 10:25:53 -- common/autotest_common.sh@10 -- # set +x 00:11:59.415 10:25:53 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:59.415 10:25:53 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:11:59.415 10:25:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:59.415 10:25:53 -- common/autotest_common.sh@10 -- # set +x 00:11:59.415 ************************************ 00:11:59.415 START TEST accel_decomp_full_mcore 00:11:59.415 ************************************ 00:11:59.415 10:25:53 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:59.415 10:25:53 -- accel/accel.sh@16 -- # local accel_opc 00:11:59.415 10:25:53 -- accel/accel.sh@17 -- # local accel_module 00:11:59.415 10:25:53 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:59.415 10:25:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:59.415 10:25:53 -- accel/accel.sh@12 -- # build_accel_config 00:11:59.415 10:25:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:59.415 10:25:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:59.415 10:25:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:59.415 10:25:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:59.415 10:25:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:59.415 10:25:53 -- accel/accel.sh@41 -- # local IFS=, 00:11:59.415 10:25:53 -- accel/accel.sh@42 -- # jq -r . 00:11:59.415 [2024-07-12 10:25:53.235262] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:59.415 [2024-07-12 10:25:53.235507] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110448 ] 00:11:59.681 [2024-07-12 10:25:53.422739] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:59.938 [2024-07-12 10:25:53.679262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:59.938 [2024-07-12 10:25:53.679402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:59.938 [2024-07-12 10:25:53.679434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:59.938 [2024-07-12 10:25:53.679439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.467 10:25:55 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:12:02.467 00:12:02.467 SPDK Configuration: 00:12:02.467 Core mask: 0xf 00:12:02.467 00:12:02.467 Accel Perf Configuration: 00:12:02.467 Workload Type: decompress 00:12:02.467 Transfer size: 111250 bytes 00:12:02.467 Vector count 1 00:12:02.467 Module: software 00:12:02.467 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:02.467 Queue depth: 32 00:12:02.467 Allocate depth: 32 00:12:02.467 # threads/core: 1 00:12:02.467 Run time: 1 seconds 00:12:02.467 Verify: Yes 00:12:02.467 00:12:02.467 Running for 1 seconds... 00:12:02.467 00:12:02.467 Core,Thread Transfers Bandwidth Failed Miscompares 00:12:02.467 ------------------------------------------------------------------------------------ 00:12:02.467 0,0 4992/s 206 MiB/s 0 0 00:12:02.467 3,0 5120/s 211 MiB/s 0 0 00:12:02.467 2,0 5120/s 211 MiB/s 0 0 00:12:02.467 1,0 5120/s 211 MiB/s 0 0 00:12:02.467 ==================================================================================== 00:12:02.467 Total 20352/s 2159 MiB/s 0 0' 00:12:02.467 10:25:55 -- accel/accel.sh@20 -- # IFS=: 00:12:02.467 10:25:55 -- accel/accel.sh@20 -- # read -r var val 00:12:02.467 10:25:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:12:02.467 10:25:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:12:02.467 10:25:55 -- accel/accel.sh@12 -- # build_accel_config 00:12:02.467 10:25:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:02.467 10:25:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:02.467 10:25:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:02.467 10:25:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:02.467 10:25:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:02.468 10:25:55 -- accel/accel.sh@41 -- # local IFS=, 00:12:02.468 10:25:55 -- accel/accel.sh@42 -- # jq -r . 00:12:02.468 [2024-07-12 10:25:55.903939] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
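The Total row of the full-buffer multicore table above is internally consistent: 20352 transfers/s at 111250 bytes each comes to about 2159 MiB/s, which is exactly what the log reports. The same cross-check in shell arithmetic, with integer division matching the log's rounding:

```bash
# Total-row sanity check for the table above: transfers/s times bytes per
# transfer, converted to MiB/s with integer division.
echo $(( 20352 * 111250 / 1024 / 1024 ))   # -> 2159, matching the Total row
```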
00:12:02.468 [2024-07-12 10:25:55.904125] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110496 ] 00:12:02.468 [2024-07-12 10:25:56.088278] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:02.468 [2024-07-12 10:25:56.339661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:02.468 [2024-07-12 10:25:56.339743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:02.468 [2024-07-12 10:25:56.339907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.468 [2024-07-12 10:25:56.339905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:02.726 10:25:56 -- accel/accel.sh@21 -- # val= 00:12:02.726 10:25:56 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.726 10:25:56 -- accel/accel.sh@20 -- # IFS=: 00:12:02.726 10:25:56 -- accel/accel.sh@20 -- # read -r var val 00:12:02.726 10:25:56 -- accel/accel.sh@21 -- # val= 00:12:02.726 10:25:56 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.726 10:25:56 -- accel/accel.sh@20 -- # IFS=: 00:12:02.726 10:25:56 -- accel/accel.sh@20 -- # read -r var val 00:12:02.726 10:25:56 -- accel/accel.sh@21 -- # val= 00:12:02.726 10:25:56 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.726 10:25:56 -- accel/accel.sh@20 -- # IFS=: 00:12:02.726 10:25:56 -- accel/accel.sh@20 -- # read -r var val 00:12:02.726 10:25:56 -- accel/accel.sh@21 -- # val=0xf 00:12:02.726 10:25:56 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.726 10:25:56 -- accel/accel.sh@20 -- # IFS=: 00:12:02.726 10:25:56 -- accel/accel.sh@20 -- # read -r var val 00:12:02.726 10:25:56 -- accel/accel.sh@21 -- # val= 00:12:02.726 10:25:56 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.726 10:25:56 -- accel/accel.sh@20 -- # IFS=: 00:12:02.726 10:25:56 -- accel/accel.sh@20 -- # read -r var val 00:12:02.726 10:25:56 -- accel/accel.sh@21 -- # val= 00:12:02.726 10:25:56 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.726 10:25:56 -- accel/accel.sh@20 -- # IFS=: 00:12:02.726 10:25:56 -- accel/accel.sh@20 -- # read -r var val 00:12:02.726 10:25:56 -- accel/accel.sh@21 -- # val=decompress 00:12:02.726 10:25:56 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.726 10:25:56 -- accel/accel.sh@24 -- # accel_opc=decompress 00:12:02.726 10:25:56 -- accel/accel.sh@20 -- # IFS=: 00:12:02.726 10:25:56 -- accel/accel.sh@20 -- # read -r var val 00:12:02.726 10:25:56 -- accel/accel.sh@21 -- # val='111250 bytes' 00:12:02.726 10:25:56 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.726 10:25:56 -- accel/accel.sh@20 -- # IFS=: 00:12:02.726 10:25:56 -- accel/accel.sh@20 -- # read -r var val 00:12:02.726 10:25:56 -- accel/accel.sh@21 -- # val= 00:12:02.726 10:25:56 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.726 10:25:56 -- accel/accel.sh@20 -- # IFS=: 00:12:02.726 10:25:56 -- accel/accel.sh@20 -- # read -r var val 00:12:02.726 10:25:56 -- accel/accel.sh@21 -- # val=software 00:12:02.726 10:25:56 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.726 10:25:56 -- accel/accel.sh@23 -- # accel_module=software 00:12:02.726 10:25:56 -- accel/accel.sh@20 -- # IFS=: 00:12:02.726 10:25:56 -- accel/accel.sh@20 -- # read -r var val 00:12:02.726 10:25:56 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:02.726 10:25:56 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.726 10:25:56 -- accel/accel.sh@20 -- # IFS=: 
00:12:02.726 10:25:56 -- accel/accel.sh@20 -- # read -r var val 00:12:02.726 10:25:56 -- accel/accel.sh@21 -- # val=32 00:12:02.726 10:25:56 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.726 10:25:56 -- accel/accel.sh@20 -- # IFS=: 00:12:02.726 10:25:56 -- accel/accel.sh@20 -- # read -r var val 00:12:02.726 10:25:56 -- accel/accel.sh@21 -- # val=32 00:12:02.726 10:25:56 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.726 10:25:56 -- accel/accel.sh@20 -- # IFS=: 00:12:02.726 10:25:56 -- accel/accel.sh@20 -- # read -r var val 00:12:02.726 10:25:56 -- accel/accel.sh@21 -- # val=1 00:12:02.726 10:25:56 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.726 10:25:56 -- accel/accel.sh@20 -- # IFS=: 00:12:02.726 10:25:56 -- accel/accel.sh@20 -- # read -r var val 00:12:02.726 10:25:56 -- accel/accel.sh@21 -- # val='1 seconds' 00:12:02.726 10:25:56 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.726 10:25:56 -- accel/accel.sh@20 -- # IFS=: 00:12:02.726 10:25:56 -- accel/accel.sh@20 -- # read -r var val 00:12:02.726 10:25:56 -- accel/accel.sh@21 -- # val=Yes 00:12:02.726 10:25:56 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.726 10:25:56 -- accel/accel.sh@20 -- # IFS=: 00:12:02.726 10:25:56 -- accel/accel.sh@20 -- # read -r var val 00:12:02.726 10:25:56 -- accel/accel.sh@21 -- # val= 00:12:02.726 10:25:56 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.726 10:25:56 -- accel/accel.sh@20 -- # IFS=: 00:12:02.726 10:25:56 -- accel/accel.sh@20 -- # read -r var val 00:12:02.726 10:25:56 -- accel/accel.sh@21 -- # val= 00:12:02.726 10:25:56 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.726 10:25:56 -- accel/accel.sh@20 -- # IFS=: 00:12:02.726 10:25:56 -- accel/accel.sh@20 -- # read -r var val 00:12:04.658 10:25:58 -- accel/accel.sh@21 -- # val= 00:12:04.658 10:25:58 -- accel/accel.sh@22 -- # case "$var" in 00:12:04.658 10:25:58 -- accel/accel.sh@20 -- # IFS=: 00:12:04.658 10:25:58 -- accel/accel.sh@20 -- # read -r var val 00:12:04.658 10:25:58 -- accel/accel.sh@21 -- # val= 00:12:04.658 10:25:58 -- accel/accel.sh@22 -- # case "$var" in 00:12:04.658 10:25:58 -- accel/accel.sh@20 -- # IFS=: 00:12:04.658 10:25:58 -- accel/accel.sh@20 -- # read -r var val 00:12:04.658 10:25:58 -- accel/accel.sh@21 -- # val= 00:12:04.658 10:25:58 -- accel/accel.sh@22 -- # case "$var" in 00:12:04.658 10:25:58 -- accel/accel.sh@20 -- # IFS=: 00:12:04.658 10:25:58 -- accel/accel.sh@20 -- # read -r var val 00:12:04.658 10:25:58 -- accel/accel.sh@21 -- # val= 00:12:04.658 10:25:58 -- accel/accel.sh@22 -- # case "$var" in 00:12:04.658 10:25:58 -- accel/accel.sh@20 -- # IFS=: 00:12:04.658 10:25:58 -- accel/accel.sh@20 -- # read -r var val 00:12:04.658 10:25:58 -- accel/accel.sh@21 -- # val= 00:12:04.658 10:25:58 -- accel/accel.sh@22 -- # case "$var" in 00:12:04.658 10:25:58 -- accel/accel.sh@20 -- # IFS=: 00:12:04.658 10:25:58 -- accel/accel.sh@20 -- # read -r var val 00:12:04.658 10:25:58 -- accel/accel.sh@21 -- # val= 00:12:04.658 10:25:58 -- accel/accel.sh@22 -- # case "$var" in 00:12:04.658 10:25:58 -- accel/accel.sh@20 -- # IFS=: 00:12:04.658 10:25:58 -- accel/accel.sh@20 -- # read -r var val 00:12:04.658 10:25:58 -- accel/accel.sh@21 -- # val= 00:12:04.658 10:25:58 -- accel/accel.sh@22 -- # case "$var" in 00:12:04.658 10:25:58 -- accel/accel.sh@20 -- # IFS=: 00:12:04.658 10:25:58 -- accel/accel.sh@20 -- # read -r var val 00:12:04.658 10:25:58 -- accel/accel.sh@21 -- # val= 00:12:04.658 10:25:58 -- accel/accel.sh@22 -- # case "$var" in 00:12:04.658 10:25:58 -- accel/accel.sh@20 -- # IFS=: 00:12:04.658 10:25:58 -- 
accel/accel.sh@20 -- # read -r var val 00:12:04.658 10:25:58 -- accel/accel.sh@21 -- # val= 00:12:04.658 10:25:58 -- accel/accel.sh@22 -- # case "$var" in 00:12:04.658 10:25:58 -- accel/accel.sh@20 -- # IFS=: 00:12:04.658 10:25:58 -- accel/accel.sh@20 -- # read -r var val 00:12:04.658 10:25:58 -- accel/accel.sh@28 -- # [[ -n software ]] 00:12:04.658 10:25:58 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:12:04.658 10:25:58 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:04.658 ************************************ 00:12:04.658 END TEST accel_decomp_full_mcore 00:12:04.658 ************************************ 00:12:04.658 00:12:04.658 real 0m5.307s 00:12:04.658 user 0m15.091s 00:12:04.658 sys 0m0.558s 00:12:04.658 10:25:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:04.658 10:25:58 -- common/autotest_common.sh@10 -- # set +x 00:12:04.658 10:25:58 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:12:04.658 10:25:58 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:12:04.658 10:25:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:04.658 10:25:58 -- common/autotest_common.sh@10 -- # set +x 00:12:04.658 ************************************ 00:12:04.658 START TEST accel_decomp_mthread 00:12:04.658 ************************************ 00:12:04.658 10:25:58 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:12:04.658 10:25:58 -- accel/accel.sh@16 -- # local accel_opc 00:12:04.658 10:25:58 -- accel/accel.sh@17 -- # local accel_module 00:12:04.658 10:25:58 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:12:04.658 10:25:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:12:04.658 10:25:58 -- accel/accel.sh@12 -- # build_accel_config 00:12:04.658 10:25:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:04.658 10:25:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:04.658 10:25:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:04.658 10:25:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:04.658 10:25:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:04.658 10:25:58 -- accel/accel.sh@41 -- # local IFS=, 00:12:04.658 10:25:58 -- accel/accel.sh@42 -- # jq -r . 00:12:04.916 [2024-07-12 10:25:58.588658] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:12:04.916 [2024-07-12 10:25:58.588850] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110561 ] 00:12:04.916 [2024-07-12 10:25:58.747910] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:05.174 [2024-07-12 10:25:59.007173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.704 10:26:01 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:12:07.705 00:12:07.705 SPDK Configuration: 00:12:07.705 Core mask: 0x1 00:12:07.705 00:12:07.705 Accel Perf Configuration: 00:12:07.705 Workload Type: decompress 00:12:07.705 Transfer size: 4096 bytes 00:12:07.705 Vector count 1 00:12:07.705 Module: software 00:12:07.705 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:07.705 Queue depth: 32 00:12:07.705 Allocate depth: 32 00:12:07.705 # threads/core: 2 00:12:07.705 Run time: 1 seconds 00:12:07.705 Verify: Yes 00:12:07.705 00:12:07.705 Running for 1 seconds... 00:12:07.705 00:12:07.705 Core,Thread Transfers Bandwidth Failed Miscompares 00:12:07.705 ------------------------------------------------------------------------------------ 00:12:07.705 0,1 33056/s 60 MiB/s 0 0 00:12:07.705 0,0 32928/s 60 MiB/s 0 0 00:12:07.705 ==================================================================================== 00:12:07.705 Total 65984/s 257 MiB/s 0 0' 00:12:07.705 10:26:01 -- accel/accel.sh@20 -- # IFS=: 00:12:07.705 10:26:01 -- accel/accel.sh@20 -- # read -r var val 00:12:07.705 10:26:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:12:07.705 10:26:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:12:07.705 10:26:01 -- accel/accel.sh@12 -- # build_accel_config 00:12:07.705 10:26:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:07.705 10:26:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:07.705 10:26:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:07.705 10:26:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:07.705 10:26:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:07.705 10:26:01 -- accel/accel.sh@41 -- # local IFS=, 00:12:07.705 10:26:01 -- accel/accel.sh@42 -- # jq -r . 00:12:07.705 [2024-07-12 10:26:01.180841] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
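Every suite in this section is driven through the same run_test harness (visible in the accel.sh@113/@114 xtrace lines): it prints the START banner, times the test body, and prints the END banner, which is where the real/user/sys triplets between suites come from. A minimal sketch of that pattern, not SPDK's actual implementation:

```bash
# Illustrative run_test-style wrapper (a sketch; SPDK's real run_test in
# autotest_common.sh does more bookkeeping than this).
run_test() {
    local name=$1; shift
    echo "START TEST $name"
    time "$@"        # emits the real/user/sys lines seen between suites
    local rc=$?
    echo "END TEST $name"
    return $rc
}
```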
00:12:07.705 [2024-07-12 10:26:01.181168] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110602 ] 00:12:07.705 [2024-07-12 10:26:01.348115] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:07.705 [2024-07-12 10:26:01.534908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.962 10:26:01 -- accel/accel.sh@21 -- # val= 00:12:07.962 10:26:01 -- accel/accel.sh@22 -- # case "$var" in 00:12:07.962 10:26:01 -- accel/accel.sh@20 -- # IFS=: 00:12:07.962 10:26:01 -- accel/accel.sh@20 -- # read -r var val 00:12:07.962 10:26:01 -- accel/accel.sh@21 -- # val= 00:12:07.962 10:26:01 -- accel/accel.sh@22 -- # case "$var" in 00:12:07.962 10:26:01 -- accel/accel.sh@20 -- # IFS=: 00:12:07.962 10:26:01 -- accel/accel.sh@20 -- # read -r var val 00:12:07.962 10:26:01 -- accel/accel.sh@21 -- # val= 00:12:07.962 10:26:01 -- accel/accel.sh@22 -- # case "$var" in 00:12:07.962 10:26:01 -- accel/accel.sh@20 -- # IFS=: 00:12:07.962 10:26:01 -- accel/accel.sh@20 -- # read -r var val 00:12:07.962 10:26:01 -- accel/accel.sh@21 -- # val=0x1 00:12:07.962 10:26:01 -- accel/accel.sh@22 -- # case "$var" in 00:12:07.962 10:26:01 -- accel/accel.sh@20 -- # IFS=: 00:12:07.962 10:26:01 -- accel/accel.sh@20 -- # read -r var val 00:12:07.962 10:26:01 -- accel/accel.sh@21 -- # val= 00:12:07.962 10:26:01 -- accel/accel.sh@22 -- # case "$var" in 00:12:07.962 10:26:01 -- accel/accel.sh@20 -- # IFS=: 00:12:07.962 10:26:01 -- accel/accel.sh@20 -- # read -r var val 00:12:07.962 10:26:01 -- accel/accel.sh@21 -- # val= 00:12:07.962 10:26:01 -- accel/accel.sh@22 -- # case "$var" in 00:12:07.962 10:26:01 -- accel/accel.sh@20 -- # IFS=: 00:12:07.962 10:26:01 -- accel/accel.sh@20 -- # read -r var val 00:12:07.962 10:26:01 -- accel/accel.sh@21 -- # val=decompress 00:12:07.962 10:26:01 -- accel/accel.sh@22 -- # case "$var" in 00:12:07.962 10:26:01 -- accel/accel.sh@24 -- # accel_opc=decompress 00:12:07.962 10:26:01 -- accel/accel.sh@20 -- # IFS=: 00:12:07.962 10:26:01 -- accel/accel.sh@20 -- # read -r var val 00:12:07.962 10:26:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:12:07.962 10:26:01 -- accel/accel.sh@22 -- # case "$var" in 00:12:07.962 10:26:01 -- accel/accel.sh@20 -- # IFS=: 00:12:07.962 10:26:01 -- accel/accel.sh@20 -- # read -r var val 00:12:07.962 10:26:01 -- accel/accel.sh@21 -- # val= 00:12:07.962 10:26:01 -- accel/accel.sh@22 -- # case "$var" in 00:12:07.962 10:26:01 -- accel/accel.sh@20 -- # IFS=: 00:12:07.962 10:26:01 -- accel/accel.sh@20 -- # read -r var val 00:12:07.962 10:26:01 -- accel/accel.sh@21 -- # val=software 00:12:07.962 10:26:01 -- accel/accel.sh@22 -- # case "$var" in 00:12:07.962 10:26:01 -- accel/accel.sh@23 -- # accel_module=software 00:12:07.962 10:26:01 -- accel/accel.sh@20 -- # IFS=: 00:12:07.962 10:26:01 -- accel/accel.sh@20 -- # read -r var val 00:12:07.962 10:26:01 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:07.962 10:26:01 -- accel/accel.sh@22 -- # case "$var" in 00:12:07.962 10:26:01 -- accel/accel.sh@20 -- # IFS=: 00:12:07.962 10:26:01 -- accel/accel.sh@20 -- # read -r var val 00:12:07.962 10:26:01 -- accel/accel.sh@21 -- # val=32 00:12:07.962 10:26:01 -- accel/accel.sh@22 -- # case "$var" in 00:12:07.962 10:26:01 -- accel/accel.sh@20 -- # IFS=: 00:12:07.962 10:26:01 -- accel/accel.sh@20 -- # read -r var val 00:12:07.962 10:26:01 -- 
accel/accel.sh@21 -- # val=32 00:12:07.962 10:26:01 -- accel/accel.sh@22 -- # case "$var" in 00:12:07.963 10:26:01 -- accel/accel.sh@20 -- # IFS=: 00:12:07.963 10:26:01 -- accel/accel.sh@20 -- # read -r var val 00:12:07.963 10:26:01 -- accel/accel.sh@21 -- # val=2 00:12:07.963 10:26:01 -- accel/accel.sh@22 -- # case "$var" in 00:12:07.963 10:26:01 -- accel/accel.sh@20 -- # IFS=: 00:12:07.963 10:26:01 -- accel/accel.sh@20 -- # read -r var val 00:12:07.963 10:26:01 -- accel/accel.sh@21 -- # val='1 seconds' 00:12:07.963 10:26:01 -- accel/accel.sh@22 -- # case "$var" in 00:12:07.963 10:26:01 -- accel/accel.sh@20 -- # IFS=: 00:12:07.963 10:26:01 -- accel/accel.sh@20 -- # read -r var val 00:12:07.963 10:26:01 -- accel/accel.sh@21 -- # val=Yes 00:12:07.963 10:26:01 -- accel/accel.sh@22 -- # case "$var" in 00:12:07.963 10:26:01 -- accel/accel.sh@20 -- # IFS=: 00:12:07.963 10:26:01 -- accel/accel.sh@20 -- # read -r var val 00:12:07.963 10:26:01 -- accel/accel.sh@21 -- # val= 00:12:07.963 10:26:01 -- accel/accel.sh@22 -- # case "$var" in 00:12:07.963 10:26:01 -- accel/accel.sh@20 -- # IFS=: 00:12:07.963 10:26:01 -- accel/accel.sh@20 -- # read -r var val 00:12:07.963 10:26:01 -- accel/accel.sh@21 -- # val= 00:12:07.963 10:26:01 -- accel/accel.sh@22 -- # case "$var" in 00:12:07.963 10:26:01 -- accel/accel.sh@20 -- # IFS=: 00:12:07.963 10:26:01 -- accel/accel.sh@20 -- # read -r var val 00:12:09.867 10:26:03 -- accel/accel.sh@21 -- # val= 00:12:09.867 10:26:03 -- accel/accel.sh@22 -- # case "$var" in 00:12:09.867 10:26:03 -- accel/accel.sh@20 -- # IFS=: 00:12:09.867 10:26:03 -- accel/accel.sh@20 -- # read -r var val 00:12:09.867 10:26:03 -- accel/accel.sh@21 -- # val= 00:12:09.867 10:26:03 -- accel/accel.sh@22 -- # case "$var" in 00:12:09.867 10:26:03 -- accel/accel.sh@20 -- # IFS=: 00:12:09.867 10:26:03 -- accel/accel.sh@20 -- # read -r var val 00:12:09.867 10:26:03 -- accel/accel.sh@21 -- # val= 00:12:09.867 10:26:03 -- accel/accel.sh@22 -- # case "$var" in 00:12:09.867 10:26:03 -- accel/accel.sh@20 -- # IFS=: 00:12:09.867 10:26:03 -- accel/accel.sh@20 -- # read -r var val 00:12:09.867 10:26:03 -- accel/accel.sh@21 -- # val= 00:12:09.867 10:26:03 -- accel/accel.sh@22 -- # case "$var" in 00:12:09.867 10:26:03 -- accel/accel.sh@20 -- # IFS=: 00:12:09.867 10:26:03 -- accel/accel.sh@20 -- # read -r var val 00:12:09.867 10:26:03 -- accel/accel.sh@21 -- # val= 00:12:09.867 10:26:03 -- accel/accel.sh@22 -- # case "$var" in 00:12:09.867 10:26:03 -- accel/accel.sh@20 -- # IFS=: 00:12:09.867 10:26:03 -- accel/accel.sh@20 -- # read -r var val 00:12:09.867 10:26:03 -- accel/accel.sh@21 -- # val= 00:12:09.867 10:26:03 -- accel/accel.sh@22 -- # case "$var" in 00:12:09.867 10:26:03 -- accel/accel.sh@20 -- # IFS=: 00:12:09.867 10:26:03 -- accel/accel.sh@20 -- # read -r var val 00:12:09.867 10:26:03 -- accel/accel.sh@21 -- # val= 00:12:09.867 10:26:03 -- accel/accel.sh@22 -- # case "$var" in 00:12:09.867 10:26:03 -- accel/accel.sh@20 -- # IFS=: 00:12:09.867 10:26:03 -- accel/accel.sh@20 -- # read -r var val 00:12:09.867 10:26:03 -- accel/accel.sh@28 -- # [[ -n software ]] 00:12:09.867 10:26:03 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:12:09.867 10:26:03 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:09.867 00:12:09.867 real 0m4.879s 00:12:09.867 user 0m4.335s 00:12:09.867 sys 0m0.401s 00:12:09.867 10:26:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:09.867 ************************************ 00:12:09.867 END TEST accel_decomp_mthread 00:12:09.867 
************************************ 00:12:09.867 10:26:03 -- common/autotest_common.sh@10 -- # set +x 00:12:09.867 10:26:03 -- accel/accel.sh@114 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:12:09.867 10:26:03 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:12:09.867 10:26:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:09.867 10:26:03 -- common/autotest_common.sh@10 -- # set +x 00:12:09.867 ************************************ 00:12:09.867 START TEST accel_decomp_full_mthread 00:12:09.867 ************************************ 00:12:09.867 10:26:03 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:12:09.867 10:26:03 -- accel/accel.sh@16 -- # local accel_opc 00:12:09.867 10:26:03 -- accel/accel.sh@17 -- # local accel_module 00:12:09.867 10:26:03 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:12:09.867 10:26:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:12:09.868 10:26:03 -- accel/accel.sh@12 -- # build_accel_config 00:12:09.868 10:26:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:09.868 10:26:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:09.868 10:26:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:09.868 10:26:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:09.868 10:26:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:09.868 10:26:03 -- accel/accel.sh@41 -- # local IFS=, 00:12:09.868 10:26:03 -- accel/accel.sh@42 -- # jq -r . 00:12:09.868 [2024-07-12 10:26:03.523933] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:12:09.868 [2024-07-12 10:26:03.524164] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110650 ] 00:12:10.126 [2024-07-12 10:26:03.690941] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:10.126 [2024-07-12 10:26:03.867363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.036 10:26:05 -- accel/accel.sh@18 -- # out='Preparing input file... 00:12:12.036 00:12:12.036 SPDK Configuration: 00:12:12.036 Core mask: 0x1 00:12:12.036 00:12:12.036 Accel Perf Configuration: 00:12:12.036 Workload Type: decompress 00:12:12.036 Transfer size: 111250 bytes 00:12:12.036 Vector count 1 00:12:12.036 Module: software 00:12:12.036 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:12.036 Queue depth: 32 00:12:12.036 Allocate depth: 32 00:12:12.036 # threads/core: 2 00:12:12.036 Run time: 1 seconds 00:12:12.036 Verify: Yes 00:12:12.036 00:12:12.036 Running for 1 seconds...
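This final decompress variant combines -o 0 (full 111250-byte transfers) with -T 2, so the results table that follows carries one row per (core,thread) pair, 0,0 and 0,1, on the single enabled core. Were the table captured to a file, the per-thread transfer rates could be summed against its Total row with a one-liner like this (results.txt is a hypothetical capture of just the table rows; the field layout is taken from the log):

```bash
# Hypothetical cross-check: sum the per-(core,thread) transfer column of a
# captured results table. For the table below, 2848 + 2784 = 5632, the Total.
awk '/^[0-9]+,[0-9]+/ { sum += $2 } END { print sum "/s" }' results.txt
```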
00:12:12.036 00:12:12.036 Core,Thread Transfers Bandwidth Failed Miscompares 00:12:12.036 ------------------------------------------------------------------------------------ 00:12:12.036 0,1 2848/s 117 MiB/s 0 0 00:12:12.036 0,0 2784/s 115 MiB/s 0 0 00:12:12.036 ==================================================================================== 00:12:12.036 Total 5632/s 597 MiB/s 0 0' 00:12:12.036 10:26:05 -- accel/accel.sh@20 -- # IFS=: 00:12:12.036 10:26:05 -- accel/accel.sh@20 -- # read -r var val 00:12:12.036 10:26:05 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:12:12.036 10:26:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:12:12.036 10:26:05 -- accel/accel.sh@12 -- # build_accel_config 00:12:12.036 10:26:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:12.036 10:26:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:12.036 10:26:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:12.036 10:26:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:12.036 10:26:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:12.036 10:26:05 -- accel/accel.sh@41 -- # local IFS=, 00:12:12.036 10:26:05 -- accel/accel.sh@42 -- # jq -r . 00:12:12.036 [2024-07-12 10:26:05.828083] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:12:12.036 [2024-07-12 10:26:05.828262] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110683 ] 00:12:12.294 [2024-07-12 10:26:05.996022] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:12.294 [2024-07-12 10:26:06.168731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.553 10:26:06 -- accel/accel.sh@21 -- # val= 00:12:12.553 10:26:06 -- accel/accel.sh@22 -- # case "$var" in 00:12:12.553 10:26:06 -- accel/accel.sh@20 -- # IFS=: 00:12:12.553 10:26:06 -- accel/accel.sh@20 -- # read -r var val 00:12:12.553 10:26:06 -- accel/accel.sh@21 -- # val= 00:12:12.553 10:26:06 -- accel/accel.sh@22 -- # case "$var" in 00:12:12.553 10:26:06 -- accel/accel.sh@20 -- # IFS=: 00:12:12.553 10:26:06 -- accel/accel.sh@20 -- # read -r var val 00:12:12.553 10:26:06 -- accel/accel.sh@21 -- # val= 00:12:12.553 10:26:06 -- accel/accel.sh@22 -- # case "$var" in 00:12:12.553 10:26:06 -- accel/accel.sh@20 -- # IFS=: 00:12:12.553 10:26:06 -- accel/accel.sh@20 -- # read -r var val 00:12:12.553 10:26:06 -- accel/accel.sh@21 -- # val=0x1 00:12:12.553 10:26:06 -- accel/accel.sh@22 -- # case "$var" in 00:12:12.553 10:26:06 -- accel/accel.sh@20 -- # IFS=: 00:12:12.553 10:26:06 -- accel/accel.sh@20 -- # read -r var val 00:12:12.553 10:26:06 -- accel/accel.sh@21 -- # val= 00:12:12.553 10:26:06 -- accel/accel.sh@22 -- # case "$var" in 00:12:12.553 10:26:06 -- accel/accel.sh@20 -- # IFS=: 00:12:12.553 10:26:06 -- accel/accel.sh@20 -- # read -r var val 00:12:12.553 10:26:06 -- accel/accel.sh@21 -- # val= 00:12:12.553 10:26:06 -- accel/accel.sh@22 -- # case "$var" in 00:12:12.553 10:26:06 -- accel/accel.sh@20 -- # IFS=: 00:12:12.553 10:26:06 -- accel/accel.sh@20 -- # read -r var val 00:12:12.553 10:26:06 -- accel/accel.sh@21 -- # val=decompress 00:12:12.553 10:26:06 -- accel/accel.sh@22 -- # case "$var" in 00:12:12.553 10:26:06 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:12:12.553 10:26:06 -- accel/accel.sh@20 -- # IFS=: 00:12:12.553 10:26:06 -- accel/accel.sh@20 -- # read -r var val 00:12:12.553 10:26:06 -- accel/accel.sh@21 -- # val='111250 bytes' 00:12:12.553 10:26:06 -- accel/accel.sh@22 -- # case "$var" in 00:12:12.553 10:26:06 -- accel/accel.sh@20 -- # IFS=: 00:12:12.553 10:26:06 -- accel/accel.sh@20 -- # read -r var val 00:12:12.553 10:26:06 -- accel/accel.sh@21 -- # val= 00:12:12.553 10:26:06 -- accel/accel.sh@22 -- # case "$var" in 00:12:12.553 10:26:06 -- accel/accel.sh@20 -- # IFS=: 00:12:12.553 10:26:06 -- accel/accel.sh@20 -- # read -r var val 00:12:12.553 10:26:06 -- accel/accel.sh@21 -- # val=software 00:12:12.553 10:26:06 -- accel/accel.sh@22 -- # case "$var" in 00:12:12.553 10:26:06 -- accel/accel.sh@23 -- # accel_module=software 00:12:12.553 10:26:06 -- accel/accel.sh@20 -- # IFS=: 00:12:12.553 10:26:06 -- accel/accel.sh@20 -- # read -r var val 00:12:12.553 10:26:06 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:12.553 10:26:06 -- accel/accel.sh@22 -- # case "$var" in 00:12:12.553 10:26:06 -- accel/accel.sh@20 -- # IFS=: 00:12:12.553 10:26:06 -- accel/accel.sh@20 -- # read -r var val 00:12:12.553 10:26:06 -- accel/accel.sh@21 -- # val=32 00:12:12.553 10:26:06 -- accel/accel.sh@22 -- # case "$var" in 00:12:12.553 10:26:06 -- accel/accel.sh@20 -- # IFS=: 00:12:12.553 10:26:06 -- accel/accel.sh@20 -- # read -r var val 00:12:12.553 10:26:06 -- accel/accel.sh@21 -- # val=32 00:12:12.553 10:26:06 -- accel/accel.sh@22 -- # case "$var" in 00:12:12.553 10:26:06 -- accel/accel.sh@20 -- # IFS=: 00:12:12.553 10:26:06 -- accel/accel.sh@20 -- # read -r var val 00:12:12.553 10:26:06 -- accel/accel.sh@21 -- # val=2 00:12:12.553 10:26:06 -- accel/accel.sh@22 -- # case "$var" in 00:12:12.553 10:26:06 -- accel/accel.sh@20 -- # IFS=: 00:12:12.553 10:26:06 -- accel/accel.sh@20 -- # read -r var val 00:12:12.553 10:26:06 -- accel/accel.sh@21 -- # val='1 seconds' 00:12:12.553 10:26:06 -- accel/accel.sh@22 -- # case "$var" in 00:12:12.553 10:26:06 -- accel/accel.sh@20 -- # IFS=: 00:12:12.553 10:26:06 -- accel/accel.sh@20 -- # read -r var val 00:12:12.553 10:26:06 -- accel/accel.sh@21 -- # val=Yes 00:12:12.553 10:26:06 -- accel/accel.sh@22 -- # case "$var" in 00:12:12.553 10:26:06 -- accel/accel.sh@20 -- # IFS=: 00:12:12.553 10:26:06 -- accel/accel.sh@20 -- # read -r var val 00:12:12.553 10:26:06 -- accel/accel.sh@21 -- # val= 00:12:12.553 10:26:06 -- accel/accel.sh@22 -- # case "$var" in 00:12:12.553 10:26:06 -- accel/accel.sh@20 -- # IFS=: 00:12:12.553 10:26:06 -- accel/accel.sh@20 -- # read -r var val 00:12:12.553 10:26:06 -- accel/accel.sh@21 -- # val= 00:12:12.553 10:26:06 -- accel/accel.sh@22 -- # case "$var" in 00:12:12.553 10:26:06 -- accel/accel.sh@20 -- # IFS=: 00:12:12.553 10:26:06 -- accel/accel.sh@20 -- # read -r var val 00:12:14.454 10:26:08 -- accel/accel.sh@21 -- # val= 00:12:14.454 10:26:08 -- accel/accel.sh@22 -- # case "$var" in 00:12:14.454 10:26:08 -- accel/accel.sh@20 -- # IFS=: 00:12:14.454 10:26:08 -- accel/accel.sh@20 -- # read -r var val 00:12:14.454 10:26:08 -- accel/accel.sh@21 -- # val= 00:12:14.454 10:26:08 -- accel/accel.sh@22 -- # case "$var" in 00:12:14.454 10:26:08 -- accel/accel.sh@20 -- # IFS=: 00:12:14.454 10:26:08 -- accel/accel.sh@20 -- # read -r var val 00:12:14.454 10:26:08 -- accel/accel.sh@21 -- # val= 00:12:14.454 10:26:08 -- accel/accel.sh@22 -- # case "$var" in 00:12:14.454 10:26:08 -- accel/accel.sh@20 -- # IFS=: 00:12:14.454 10:26:08 -- accel/accel.sh@20 -- # 
read -r var val 00:12:14.454 10:26:08 -- accel/accel.sh@21 -- # val= 00:12:14.454 10:26:08 -- accel/accel.sh@22 -- # case "$var" in 00:12:14.454 10:26:08 -- accel/accel.sh@20 -- # IFS=: 00:12:14.454 10:26:08 -- accel/accel.sh@20 -- # read -r var val 00:12:14.454 10:26:08 -- accel/accel.sh@21 -- # val= 00:12:14.454 10:26:08 -- accel/accel.sh@22 -- # case "$var" in 00:12:14.454 10:26:08 -- accel/accel.sh@20 -- # IFS=: 00:12:14.454 10:26:08 -- accel/accel.sh@20 -- # read -r var val 00:12:14.454 10:26:08 -- accel/accel.sh@21 -- # val= 00:12:14.454 10:26:08 -- accel/accel.sh@22 -- # case "$var" in 00:12:14.454 10:26:08 -- accel/accel.sh@20 -- # IFS=: 00:12:14.454 10:26:08 -- accel/accel.sh@20 -- # read -r var val 00:12:14.454 10:26:08 -- accel/accel.sh@21 -- # val= 00:12:14.454 10:26:08 -- accel/accel.sh@22 -- # case "$var" in 00:12:14.454 10:26:08 -- accel/accel.sh@20 -- # IFS=: 00:12:14.454 10:26:08 -- accel/accel.sh@20 -- # read -r var val 00:12:14.454 10:26:08 -- accel/accel.sh@28 -- # [[ -n software ]] 00:12:14.454 10:26:08 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:12:14.454 10:26:08 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:14.454 00:12:14.454 real 0m4.610s 00:12:14.454 user 0m4.089s 00:12:14.454 sys 0m0.376s 00:12:14.454 10:26:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:14.454 ************************************ 00:12:14.454 END TEST accel_decomp_full_mthread 00:12:14.454 ************************************ 00:12:14.454 10:26:08 -- common/autotest_common.sh@10 -- # set +x 00:12:14.454 10:26:08 -- accel/accel.sh@116 -- # [[ n == y ]] 00:12:14.454 10:26:08 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:12:14.454 10:26:08 -- accel/accel.sh@129 -- # build_accel_config 00:12:14.454 10:26:08 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:12:14.454 10:26:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:14.454 10:26:08 -- common/autotest_common.sh@10 -- # set +x 00:12:14.454 10:26:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:14.454 10:26:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:14.454 10:26:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:14.454 10:26:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:14.454 10:26:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:14.454 10:26:08 -- accel/accel.sh@41 -- # local IFS=, 00:12:14.454 10:26:08 -- accel/accel.sh@42 -- # jq -r . 00:12:14.454 ************************************ 00:12:14.454 START TEST accel_dif_functional_tests 00:12:14.454 ************************************ 00:12:14.454 10:26:08 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:12:14.454 [2024-07-12 10:26:08.217813] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
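The suite launched here swaps accel_perf for a dedicated CUnit binary. DIF (the T10/NVMe Data Integrity Field) attaches per-block protection metadata consisting of a guard tag (a CRC over the block data), an application tag, and a reference tag; the negative tests below corrupt each field on purpose, so the *ERROR* lines from dif.c are the expected evidence that verification catches each mismatch, not test failures. The driver is invoked exactly as logged:

```bash
# The DIF functional tests take the same accel JSON config on fd 62
# (binary path as logged in this run).
/home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62
```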
00:12:14.454 [2024-07-12 10:26:08.218538] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110760 ] 00:12:14.713 [2024-07-12 10:26:08.394227] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:14.713 [2024-07-12 10:26:08.552043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:14.713 [2024-07-12 10:26:08.552188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.713 [2024-07-12 10:26:08.552167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:14.972 00:12:14.972 00:12:14.972 CUnit - A unit testing framework for C - Version 2.1-3 00:12:14.972 http://cunit.sourceforge.net/ 00:12:14.972 00:12:14.972 00:12:14.972 Suite: accel_dif 00:12:14.972 Test: verify: DIF generated, GUARD check ...passed 00:12:14.972 Test: verify: DIF generated, APPTAG check ...passed 00:12:14.972 Test: verify: DIF generated, REFTAG check ...passed 00:12:14.972 Test: verify: DIF not generated, GUARD check ...[2024-07-12 10:26:08.816473] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:12:14.972 passed 00:12:14.972 Test: verify: DIF not generated, APPTAG check ...[2024-07-12 10:26:08.816629] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:12:14.972 [2024-07-12 10:26:08.816737] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:12:14.972 passed 00:12:14.972 Test: verify: DIF not generated, REFTAG check ...[2024-07-12 10:26:08.816795] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:12:14.972 [2024-07-12 10:26:08.816871] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:12:14.972 passed 00:12:14.972 Test: verify: APPTAG correct, APPTAG check ...passed 00:12:14.972 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-12 10:26:08.816941] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:12:14.972 [2024-07-12 10:26:08.817117] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:12:14.972 passed 00:12:14.972 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:12:14.972 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:12:14.972 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:12:14.972 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-12 10:26:08.817359] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:12:14.972 passed 00:12:14.972 Test: generate copy: DIF generated, GUARD check ...passed 00:12:14.972 Test: generate copy: DIF generated, APPTAG check ...passed 00:12:14.972 Test: generate copy: DIF generated, REFTAG check ...passed 00:12:14.972 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:12:14.972 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:12:14.972 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:12:14.972 Test: generate copy: iovecs-len validate ...[2024-07-12 10:26:08.817802] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size.
00:12:14.972 passed 00:12:14.972 Test: generate copy: buffer alignment validate ...passed 00:12:14.972 00:12:14.972 Run Summary: Type Total Ran Passed Failed Inactive 00:12:14.972 suites 1 1 n/a 0 0 00:12:14.972 tests 20 20 20 0 0 00:12:14.972 asserts 204 204 204 0 n/a 00:12:14.972 00:12:14.972 Elapsed time = 0.001 seconds 00:12:15.907 00:12:15.907 real 0m1.624s 00:12:15.907 user 0m3.037s 00:12:15.907 sys 0m0.300s 00:12:15.907 ************************************ 00:12:15.907 END TEST accel_dif_functional_tests 00:12:15.907 ************************************ 00:12:15.907 10:26:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:15.907 10:26:09 -- common/autotest_common.sh@10 -- # set +x 00:12:15.907 00:12:15.907 real 1m45.132s 00:12:15.907 user 1m55.757s 00:12:15.907 sys 0m9.393s 00:12:15.908 10:26:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:15.908 10:26:09 -- common/autotest_common.sh@10 -- # set +x 00:12:15.908 ************************************ 00:12:15.908 END TEST accel 00:12:15.908 ************************************ 00:12:16.165 10:26:09 -- spdk/autotest.sh@190 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:12:16.165 10:26:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:16.165 10:26:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:16.165 10:26:09 -- common/autotest_common.sh@10 -- # set +x 00:12:16.165 ************************************ 00:12:16.165 START TEST accel_rpc 00:12:16.165 ************************************ 00:12:16.165 10:26:09 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:12:16.165 * Looking for test storage... 00:12:16.165 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:12:16.165 10:26:09 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:12:16.165 10:26:09 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=110838 00:12:16.165 10:26:09 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:12:16.165 10:26:09 -- accel/accel_rpc.sh@15 -- # waitforlisten 110838 00:12:16.165 10:26:09 -- common/autotest_common.sh@819 -- # '[' -z 110838 ']' 00:12:16.165 10:26:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:16.165 10:26:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:16.165 10:26:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:16.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:16.165 10:26:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:16.165 10:26:09 -- common/autotest_common.sh@10 -- # set +x 00:12:16.165 [2024-07-12 10:26:09.969199] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:12:16.165 [2024-07-12 10:26:09.969392] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110838 ] 00:12:16.423 [2024-07-12 10:26:10.119238] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:16.423 [2024-07-12 10:26:10.288152] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:16.423 [2024-07-12 10:26:10.288384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:16.988 10:26:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:16.988 10:26:10 -- common/autotest_common.sh@852 -- # return 0 00:12:16.988 10:26:10 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:12:16.988 10:26:10 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:12:16.988 10:26:10 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:12:16.988 10:26:10 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:12:16.988 10:26:10 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:12:16.988 10:26:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:16.988 10:26:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:16.988 10:26:10 -- common/autotest_common.sh@10 -- # set +x 00:12:16.988 ************************************ 00:12:16.988 START TEST accel_assign_opcode 00:12:16.988 ************************************ 00:12:16.988 10:26:10 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:12:16.988 10:26:10 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:12:16.988 10:26:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:16.988 10:26:10 -- common/autotest_common.sh@10 -- # set +x 00:12:16.988 [2024-07-12 10:26:10.909041] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:12:17.247 10:26:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:17.247 10:26:10 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:12:17.247 10:26:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:17.247 10:26:10 -- common/autotest_common.sh@10 -- # set +x 00:12:17.247 [2024-07-12 10:26:10.917029] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:12:17.247 10:26:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:17.247 10:26:10 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:12:17.247 10:26:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:17.247 10:26:10 -- common/autotest_common.sh@10 -- # set +x 00:12:17.814 10:26:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:17.814 10:26:11 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:12:17.814 10:26:11 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:12:17.814 10:26:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:17.814 10:26:11 -- common/autotest_common.sh@10 -- # set +x 00:12:17.814 10:26:11 -- accel/accel_rpc.sh@42 -- # grep software 00:12:17.814 10:26:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:17.814 software 00:12:17.814 ************************************ 00:12:17.814 END TEST accel_assign_opcode 00:12:17.814 ************************************ 00:12:17.814 00:12:17.814 real 0m0.708s 00:12:17.814 user 0m0.059s 00:12:17.814 sys 0m0.008s 00:12:17.814 10:26:11 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:12:17.814 10:26:11 -- common/autotest_common.sh@10 -- # set +x 00:12:17.814 10:26:11 -- accel/accel_rpc.sh@55 -- # killprocess 110838 00:12:17.814 10:26:11 -- common/autotest_common.sh@926 -- # '[' -z 110838 ']' 00:12:17.814 10:26:11 -- common/autotest_common.sh@930 -- # kill -0 110838 00:12:17.814 10:26:11 -- common/autotest_common.sh@931 -- # uname 00:12:17.814 10:26:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:17.814 10:26:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 110838 00:12:17.814 killing process with pid 110838 00:12:17.814 10:26:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:17.814 10:26:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:17.814 10:26:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 110838' 00:12:17.814 10:26:11 -- common/autotest_common.sh@945 -- # kill 110838 00:12:17.814 10:26:11 -- common/autotest_common.sh@950 -- # wait 110838 00:12:19.714 00:12:19.714 real 0m3.568s 00:12:19.714 user 0m3.559s 00:12:19.714 sys 0m0.495s 00:12:19.714 10:26:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:19.714 ************************************ 00:12:19.714 10:26:13 -- common/autotest_common.sh@10 -- # set +x 00:12:19.714 END TEST accel_rpc 00:12:19.714 ************************************ 00:12:19.714 10:26:13 -- spdk/autotest.sh@191 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:12:19.714 10:26:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:19.714 10:26:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:19.714 10:26:13 -- common/autotest_common.sh@10 -- # set +x 00:12:19.714 ************************************ 00:12:19.714 START TEST app_cmdline 00:12:19.714 ************************************ 00:12:19.714 10:26:13 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:12:19.714 * Looking for test storage... 00:12:19.714 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:12:19.714 10:26:13 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:12:19.714 10:26:13 -- app/cmdline.sh@17 -- # spdk_tgt_pid=110965 00:12:19.714 10:26:13 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:12:19.714 10:26:13 -- app/cmdline.sh@18 -- # waitforlisten 110965 00:12:19.714 10:26:13 -- common/autotest_common.sh@819 -- # '[' -z 110965 ']' 00:12:19.714 10:26:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:19.714 10:26:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:19.714 10:26:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:19.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:19.714 10:26:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:19.714 10:26:13 -- common/autotest_common.sh@10 -- # set +x 00:12:19.714 [2024-07-12 10:26:13.595441] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
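The accel_rpc suite above is an opcode-routing check: with spdk_tgt parked in --wait-for-rpc, accel_assign_opc first maps the copy opcode to a nonexistent module named "incorrect" and then to "software", and after framework_start_init only the valid mapping survives, which the accel_get_opc_assignments | jq -r .copy | grep software pipeline confirms. A sketch of the same sequence replayed by hand, assuming a fresh target started with --wait-for-rpc on the default socket:

    # Manual replay of the opcode-assignment flow (sketch).
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC accel_assign_opc -o copy -m software    # must land before framework init
    $RPC framework_start_init                    # assignments are fixed from here on
    $RPC accel_get_opc_assignments | jq -r .copy # expected output: software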
00:12:19.714 [2024-07-12 10:26:13.595619] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110965 ] 00:12:19.972 [2024-07-12 10:26:13.763658] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:20.231 [2024-07-12 10:26:13.922479] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:20.231 [2024-07-12 10:26:13.922727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.604 10:26:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:21.604 10:26:15 -- common/autotest_common.sh@852 -- # return 0 00:12:21.604 10:26:15 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:12:21.605 { 00:12:21.605 "version": "SPDK v24.01.1-pre git sha1 4b94202c6", 00:12:21.605 "fields": { 00:12:21.605 "major": 24, 00:12:21.605 "minor": 1, 00:12:21.605 "patch": 1, 00:12:21.605 "suffix": "-pre", 00:12:21.605 "commit": "4b94202c6" 00:12:21.605 } 00:12:21.605 } 00:12:21.605 10:26:15 -- app/cmdline.sh@22 -- # expected_methods=() 00:12:21.605 10:26:15 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:12:21.605 10:26:15 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:12:21.605 10:26:15 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:12:21.605 10:26:15 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:12:21.605 10:26:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:21.605 10:26:15 -- common/autotest_common.sh@10 -- # set +x 00:12:21.605 10:26:15 -- app/cmdline.sh@26 -- # sort 00:12:21.605 10:26:15 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:12:21.605 10:26:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:21.605 10:26:15 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:12:21.605 10:26:15 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:12:21.605 10:26:15 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:21.605 10:26:15 -- common/autotest_common.sh@640 -- # local es=0 00:12:21.605 10:26:15 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:21.605 10:26:15 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:21.605 10:26:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:21.605 10:26:15 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:21.605 10:26:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:21.605 10:26:15 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:21.605 10:26:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:21.605 10:26:15 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:21.605 10:26:15 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:12:21.605 10:26:15 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:21.863 request: 00:12:21.863 { 00:12:21.863 "method": "env_dpdk_get_mem_stats", 00:12:21.863 "req_id": 1 00:12:21.863 } 00:12:21.863 Got 
JSON-RPC error response 00:12:21.863 response: 00:12:21.863 { 00:12:21.863 "code": -32601, 00:12:21.863 "message": "Method not found" 00:12:21.863 } 00:12:21.863 10:26:15 -- common/autotest_common.sh@643 -- # es=1 00:12:21.863 10:26:15 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:21.863 10:26:15 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:21.863 10:26:15 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:12:21.863 10:26:15 -- app/cmdline.sh@1 -- # killprocess 110965 00:12:21.863 10:26:15 -- common/autotest_common.sh@926 -- # '[' -z 110965 ']' 00:12:21.863 10:26:15 -- common/autotest_common.sh@930 -- # kill -0 110965 00:12:21.863 10:26:15 -- common/autotest_common.sh@931 -- # uname 00:12:21.863 10:26:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:21.863 10:26:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 110965 00:12:21.863 killing process with pid 110965 00:12:21.863 10:26:15 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:21.863 10:26:15 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:21.863 10:26:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 110965' 00:12:21.863 10:26:15 -- common/autotest_common.sh@945 -- # kill 110965 00:12:21.863 10:26:15 -- common/autotest_common.sh@950 -- # wait 110965 00:12:23.765 00:12:23.765 real 0m4.040s 00:12:23.765 user 0m4.636s 00:12:23.765 sys 0m0.479s 00:12:23.765 10:26:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:23.765 10:26:17 -- common/autotest_common.sh@10 -- # set +x 00:12:23.765 ************************************ 00:12:23.765 END TEST app_cmdline 00:12:23.765 ************************************ 00:12:23.765 10:26:17 -- spdk/autotest.sh@192 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:12:23.765 10:26:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:23.765 10:26:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:23.765 10:26:17 -- common/autotest_common.sh@10 -- # set +x 00:12:23.765 ************************************ 00:12:23.765 START TEST version 00:12:23.765 ************************************ 00:12:23.765 10:26:17 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:12:23.765 * Looking for test storage... 
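The app_cmdline suite above pins spdk_tgt to a two-method whitelist via --rpcs-allowed spdk_get_version,rpc_get_methods, so the env_dpdk_get_mem_stats call is supposed to bounce with the -32601 "Method not found" response shown, which the NOT helper inverts into a pass. A sketch of probing the whitelist directly, assuming the target is still listening on the default socket:

    # Probe the RPC whitelist (sketch).
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC spdk_get_version | jq -r .version   # allowed: prints the SPDK version string
    $RPC env_dpdk_get_mem_stats              # blocked: fails with JSON-RPC error -32601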
00:12:23.765 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:12:23.765 10:26:17 -- app/version.sh@17 -- # get_header_version major 00:12:23.765 10:26:17 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:23.765 10:26:17 -- app/version.sh@14 -- # tr -d '"' 00:12:23.765 10:26:17 -- app/version.sh@14 -- # cut -f2 00:12:23.765 10:26:17 -- app/version.sh@17 -- # major=24 00:12:23.765 10:26:17 -- app/version.sh@18 -- # get_header_version minor 00:12:23.765 10:26:17 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:23.765 10:26:17 -- app/version.sh@14 -- # cut -f2 00:12:23.766 10:26:17 -- app/version.sh@14 -- # tr -d '"' 00:12:23.766 10:26:17 -- app/version.sh@18 -- # minor=1 00:12:23.766 10:26:17 -- app/version.sh@19 -- # get_header_version patch 00:12:23.766 10:26:17 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:23.766 10:26:17 -- app/version.sh@14 -- # cut -f2 00:12:23.766 10:26:17 -- app/version.sh@14 -- # tr -d '"' 00:12:23.766 10:26:17 -- app/version.sh@19 -- # patch=1 00:12:23.766 10:26:17 -- app/version.sh@20 -- # get_header_version suffix 00:12:23.766 10:26:17 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:23.766 10:26:17 -- app/version.sh@14 -- # cut -f2 00:12:23.766 10:26:17 -- app/version.sh@14 -- # tr -d '"' 00:12:23.766 10:26:17 -- app/version.sh@20 -- # suffix=-pre 00:12:23.766 10:26:17 -- app/version.sh@22 -- # version=24.1 00:12:23.766 10:26:17 -- app/version.sh@25 -- # (( patch != 0 )) 00:12:23.766 10:26:17 -- app/version.sh@25 -- # version=24.1.1 00:12:23.766 10:26:17 -- app/version.sh@28 -- # version=24.1.1rc0 00:12:23.766 10:26:17 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:23.766 10:26:17 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:12:23.766 10:26:17 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:12:23.766 10:26:17 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:12:23.766 00:12:23.766 real 0m0.133s 00:12:23.766 user 0m0.107s 00:12:23.766 sys 0m0.055s 00:12:23.766 10:26:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:23.766 ************************************ 00:12:23.766 END TEST version 00:12:23.766 10:26:17 -- common/autotest_common.sh@10 -- # set +x 00:12:23.766 ************************************ 00:12:24.025 10:26:17 -- spdk/autotest.sh@194 -- # '[' 1 -eq 1 ']' 00:12:24.025 10:26:17 -- spdk/autotest.sh@195 -- # run_test blockdev_general /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:12:24.025 10:26:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:24.025 10:26:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:24.025 10:26:17 -- common/autotest_common.sh@10 -- # set +x 00:12:24.025 ************************************ 00:12:24.025 START TEST blockdev_general 00:12:24.025 ************************************ 00:12:24.025 10:26:17 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:12:24.025 * Looking for test storage... 
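version.sh, which just passed above, builds the version string by scraping the SPDK_VERSION_* defines out of include/spdk/version.h with the grep | cut | tr pipeline visible in the log (the bare cut -f2 relies on the tab between the define name and its value), assembles 24.1.1rc0 from major/minor/patch plus the -pre suffix, and checks that Python's spdk.__version__ reports the same thing. Condensed into one helper, the header-side extraction is:

    # The same extraction version.sh performs, folded into a function (sketch).
    get_header_version() {   # e.g. get_header_version MAJOR  ->  24
        grep -E "^#define SPDK_VERSION_$1[[:space:]]+" \
            /home/vagrant/spdk_repo/spdk/include/spdk/version.h | cut -f2 | tr -d '"'
    }
    ver="$(get_header_version MAJOR).$(get_header_version MINOR).$(get_header_version PATCH)"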
00:12:24.025 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:24.025 10:26:17 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:24.025 10:26:17 -- bdev/nbd_common.sh@6 -- # set -e 00:12:24.025 10:26:17 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:12:24.025 10:26:17 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:24.025 10:26:17 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:12:24.025 10:26:17 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:12:24.025 10:26:17 -- bdev/blockdev.sh@18 -- # : 00:12:24.025 10:26:17 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:12:24.025 10:26:17 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:12:24.025 10:26:17 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:12:24.025 10:26:17 -- bdev/blockdev.sh@672 -- # uname -s 00:12:24.025 10:26:17 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:12:24.025 10:26:17 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:12:24.025 10:26:17 -- bdev/blockdev.sh@680 -- # test_type=bdev 00:12:24.025 10:26:17 -- bdev/blockdev.sh@681 -- # crypto_device= 00:12:24.025 10:26:17 -- bdev/blockdev.sh@682 -- # dek= 00:12:24.025 10:26:17 -- bdev/blockdev.sh@683 -- # env_ctx= 00:12:24.025 10:26:17 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:12:24.025 10:26:17 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:12:24.025 10:26:17 -- bdev/blockdev.sh@688 -- # [[ bdev == bdev ]] 00:12:24.025 10:26:17 -- bdev/blockdev.sh@689 -- # wait_for_rpc=--wait-for-rpc 00:12:24.025 10:26:17 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:12:24.025 10:26:17 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=111166 00:12:24.025 10:26:17 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:24.025 10:26:17 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:12:24.025 10:26:17 -- bdev/blockdev.sh@47 -- # waitforlisten 111166 00:12:24.025 10:26:17 -- common/autotest_common.sh@819 -- # '[' -z 111166 ']' 00:12:24.025 10:26:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:24.025 10:26:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:24.025 10:26:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:24.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:24.025 10:26:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:24.025 10:26:17 -- common/autotest_common.sh@10 -- # set +x 00:12:24.025 [2024-07-12 10:26:17.883929] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:12:24.025 [2024-07-12 10:26:17.884118] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111166 ] 00:12:24.284 [2024-07-12 10:26:18.049295] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:24.543 [2024-07-12 10:26:18.227608] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:24.543 [2024-07-12 10:26:18.227873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:25.110 10:26:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:25.110 10:26:18 -- common/autotest_common.sh@852 -- # return 0 00:12:25.110 10:26:18 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:12:25.110 10:26:18 -- bdev/blockdev.sh@694 -- # setup_bdev_conf 00:12:25.110 10:26:18 -- bdev/blockdev.sh@51 -- # rpc_cmd 00:12:25.110 10:26:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:25.110 10:26:18 -- common/autotest_common.sh@10 -- # set +x 00:12:25.677 [2024-07-12 10:26:19.415943] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:25.677 [2024-07-12 10:26:19.416026] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:25.677 00:12:25.677 [2024-07-12 10:26:19.423912] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:25.677 [2024-07-12 10:26:19.423969] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:25.677 00:12:25.677 Malloc0 00:12:25.677 Malloc1 00:12:25.677 Malloc2 00:12:25.678 Malloc3 00:12:25.936 Malloc4 00:12:25.936 Malloc5 00:12:25.936 Malloc6 00:12:25.936 Malloc7 00:12:25.936 Malloc8 00:12:25.936 Malloc9 00:12:25.936 [2024-07-12 10:26:19.784166] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:25.936 [2024-07-12 10:26:19.784249] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:25.936 [2024-07-12 10:26:19.784283] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:12:25.937 [2024-07-12 10:26:19.784317] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:25.937 [2024-07-12 10:26:19.786646] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:25.937 [2024-07-12 10:26:19.786700] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:25.937 TestPT 00:12:25.937 10:26:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:25.937 10:26:19 -- bdev/blockdev.sh@74 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000 00:12:25.937 5000+0 records in 00:12:25.937 5000+0 records out 00:12:25.937 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0257566 s, 398 MB/s 00:12:25.937 10:26:19 -- bdev/blockdev.sh@75 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048 00:12:25.937 10:26:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:25.937 10:26:19 -- common/autotest_common.sh@10 -- # set +x 00:12:26.196 AIO0 00:12:26.196 10:26:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:26.196 10:26:19 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:12:26.196 10:26:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:26.196 10:26:19 -- common/autotest_common.sh@10 -- # set +x 
00:12:26.196 10:26:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:26.196 10:26:19 -- bdev/blockdev.sh@738 -- # cat 00:12:26.196 10:26:19 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:12:26.196 10:26:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:26.196 10:26:19 -- common/autotest_common.sh@10 -- # set +x 00:12:26.196 10:26:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:26.196 10:26:19 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:12:26.196 10:26:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:26.196 10:26:19 -- common/autotest_common.sh@10 -- # set +x 00:12:26.196 10:26:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:26.196 10:26:19 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:12:26.196 10:26:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:26.196 10:26:19 -- common/autotest_common.sh@10 -- # set +x 00:12:26.196 10:26:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:26.197 10:26:19 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:12:26.197 10:26:19 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:12:26.197 10:26:19 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:12:26.197 10:26:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:26.197 10:26:19 -- common/autotest_common.sh@10 -- # set +x 00:12:26.197 10:26:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:26.197 10:26:20 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:12:26.197 10:26:20 -- bdev/blockdev.sh@747 -- # jq -r .name 00:12:26.198 10:26:20 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "ef05c313-e0f1-46ff-82a9-975adafd3581"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "ef05c313-e0f1-46ff-82a9-975adafd3581",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "4ece179e-ca4a-5951-b8ba-d332b297cbee"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "4ece179e-ca4a-5951-b8ba-d332b297cbee",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "f5e89f13-0bf1-5921-b161-f069dd9f7943"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "f5e89f13-0bf1-5921-b161-f069dd9f7943",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' 
"r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "b333a681-153d-50e7-a2a5-c2f6018eec6b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b333a681-153d-50e7-a2a5-c2f6018eec6b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "7a67bd11-f11a-5df3-87e0-103f10bb8d7f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7a67bd11-f11a-5df3-87e0-103f10bb8d7f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "1f65182c-9863-5b77-806c-1d963eb76f42"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "1f65182c-9863-5b77-806c-1d963eb76f42",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "e64bc099-6eab-5211-9d09-fe10b06fa1d6"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e64bc099-6eab-5211-9d09-fe10b06fa1d6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": 
[' ' "d183281f-6a14-538c-ae56-c9f0a394e340"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "d183281f-6a14-538c-ae56-c9f0a394e340",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "4d2b6b6c-05cc-50f0-b9ea-96d90a40eaae"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "4d2b6b6c-05cc-50f0-b9ea-96d90a40eaae",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "0214e2cc-0b39-5760-a47a-b771dcb6bafd"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "0214e2cc-0b39-5760-a47a-b771dcb6bafd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "06e8e03f-01a7-50f9-8f3a-89f1558d94ed"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "06e8e03f-01a7-50f9-8f3a-89f1558d94ed",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "9a7b985e-4f46-5d62-a341-e2140cfa95bf"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "9a7b985e-4f46-5d62-a341-e2140cfa95bf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' 
"compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "c3b84cf8-134a-4310-85d8-e866ea9525dd"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "c3b84cf8-134a-4310-85d8-e866ea9525dd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "c3b84cf8-134a-4310-85d8-e866ea9525dd",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "6841f909-cd63-4484-bc50-f1f80b539a5b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "6c2393f1-fb7f-4083-b667-9a99ca25ac7c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "773e36b1-c7dc-40c4-8a5c-d5d187339045"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "773e36b1-c7dc-40c4-8a5c-d5d187339045",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "773e36b1-c7dc-40c4-8a5c-d5d187339045",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "f10a349f-a00e-4df4-bd23-cbe391110f11",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "62cf98b6-68da-411c-a4c3-081f13825df4",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "36ae2617-d3b3-49c4-bf12-51c18accd4ca"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "36ae2617-d3b3-49c4-bf12-51c18accd4ca",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 
0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "36ae2617-d3b3-49c4-bf12-51c18accd4ca",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "e8e7d7cf-a624-47f7-9657-509450fdd508",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "ce052616-d99a-4313-8e43-898fdacf7d12",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "545b2484-244f-4316-af9f-36f6d10604be"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "545b2484-244f-4316-af9f-36f6d10604be",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:12:26.198 10:26:20 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:12:26.198 10:26:20 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Malloc0 00:12:26.198 10:26:20 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:12:26.198 10:26:20 -- bdev/blockdev.sh@752 -- # killprocess 111166 00:12:26.198 10:26:20 -- common/autotest_common.sh@926 -- # '[' -z 111166 ']' 00:12:26.198 10:26:20 -- common/autotest_common.sh@930 -- # kill -0 111166 00:12:26.198 10:26:20 -- common/autotest_common.sh@931 -- # uname 00:12:26.456 10:26:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:26.456 10:26:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 111166 00:12:26.456 10:26:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:26.456 10:26:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:26.456 10:26:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 111166' 00:12:26.456 killing process with pid 111166 00:12:26.456 10:26:20 -- common/autotest_common.sh@945 -- # kill 111166 00:12:26.456 10:26:20 -- common/autotest_common.sh@950 -- # wait 111166 00:12:28.989 10:26:22 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:28.989 10:26:22 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:12:28.989 10:26:22 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 
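The wall of quoted JSON above is xtrace replaying the bdev_get_bdevs dump: blockdev.sh selects every unclaimed bdev (Malloc0, the Malloc1p*/Malloc2p* splits, TestPT, raid0, concat0, raid1 and AIO0), extracts the .name fields into its bdev list, and picks Malloc0 as the hello-world device. The same enumeration as a one-liner, assuming a live target:

    # Enumerate the unclaimed bdevs driving the suites below (sketch).
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC bdev_get_bdevs | jq -r '.[] | select(.claimed == false) | .name'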
00:12:28.989 10:26:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:28.989 10:26:22 -- common/autotest_common.sh@10 -- # set +x 00:12:28.989 ************************************ 00:12:28.989 START TEST bdev_hello_world 00:12:28.989 ************************************ 00:12:28.989 10:26:22 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:12:28.989 [2024-07-12 10:26:22.656615] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:12:28.989 [2024-07-12 10:26:22.656807] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111257 ] 00:12:28.989 [2024-07-12 10:26:22.824668] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:29.253 [2024-07-12 10:26:22.978746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.510 [2024-07-12 10:26:23.300226] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:29.510 [2024-07-12 10:26:23.300313] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:29.510 [2024-07-12 10:26:23.308192] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:29.510 [2024-07-12 10:26:23.308262] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:29.510 [2024-07-12 10:26:23.316222] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:29.510 [2024-07-12 10:26:23.316267] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:29.510 [2024-07-12 10:26:23.316300] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:29.768 [2024-07-12 10:26:23.483392] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:29.768 [2024-07-12 10:26:23.483530] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:29.768 [2024-07-12 10:26:23.483591] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:12:29.768 [2024-07-12 10:26:23.483624] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:29.768 [2024-07-12 10:26:23.485863] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:29.768 [2024-07-12 10:26:23.485922] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:30.026 [2024-07-12 10:26:23.765013] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:12:30.026 [2024-07-12 10:26:23.765157] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:12:30.026 [2024-07-12 10:26:23.765235] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:12:30.026 [2024-07-12 10:26:23.765290] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:12:30.026 [2024-07-12 10:26:23.765369] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:12:30.026 [2024-07-12 10:26:23.765401] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:12:30.026 [2024-07-12 10:26:23.765490] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
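hello_bdev above is the smoke test for that stack: it loads the dumped JSON config, opens Malloc0, acquires an io channel, writes "Hello World!", reads it back (the line above) and then stops the app just below. Its invocation, as run_test issued it, minus the empty extra-args slot:

    # The hello-world example against the Malloc0 bdev (as invoked above).
    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0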
00:12:30.026 00:12:30.026 [2024-07-12 10:26:23.765560] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:12:31.928 ************************************ 00:12:31.928 END TEST bdev_hello_world 00:12:31.928 ************************************ 00:12:31.928 00:12:31.928 real 0m2.770s 00:12:31.928 user 0m2.287s 00:12:31.928 sys 0m0.337s 00:12:31.928 10:26:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:31.928 10:26:25 -- common/autotest_common.sh@10 -- # set +x 00:12:31.928 10:26:25 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:12:31.928 10:26:25 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:31.928 10:26:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:31.928 10:26:25 -- common/autotest_common.sh@10 -- # set +x 00:12:31.928 ************************************ 00:12:31.928 START TEST bdev_bounds 00:12:31.928 ************************************ 00:12:31.928 10:26:25 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:12:31.928 10:26:25 -- bdev/blockdev.sh@288 -- # bdevio_pid=111313 00:12:31.928 Process bdevio pid: 111313 00:12:31.928 10:26:25 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:12:31.928 10:26:25 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 111313' 00:12:31.928 10:26:25 -- bdev/blockdev.sh@291 -- # waitforlisten 111313 00:12:31.928 10:26:25 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:31.928 10:26:25 -- common/autotest_common.sh@819 -- # '[' -z 111313 ']' 00:12:31.928 10:26:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:31.928 10:26:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:31.928 10:26:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:31.928 10:26:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:31.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:31.928 10:26:25 -- common/autotest_common.sh@10 -- # set +x 00:12:31.928 [2024-07-12 10:26:25.478633] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:12:31.928 [2024-07-12 10:26:25.478841] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111313 ] 00:12:31.928 [2024-07-12 10:26:25.654859] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:31.928 [2024-07-12 10:26:25.812894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:31.928 [2024-07-12 10:26:25.813050] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.928 [2024-07-12 10:26:25.813029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:32.496 [2024-07-12 10:26:26.153056] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:32.496 [2024-07-12 10:26:26.153192] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:32.496 [2024-07-12 10:26:26.161023] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:32.496 [2024-07-12 10:26:26.161143] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:32.496 [2024-07-12 10:26:26.169047] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:32.496 [2024-07-12 10:26:26.169148] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:32.496 [2024-07-12 10:26:26.169176] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:32.496 [2024-07-12 10:26:26.360648] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:32.497 [2024-07-12 10:26:26.360814] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:32.497 [2024-07-12 10:26:26.360877] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:12:32.497 [2024-07-12 10:26:26.360902] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:32.497 [2024-07-12 10:26:26.363297] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:32.497 [2024-07-12 10:26:26.363405] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:33.433 10:26:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:33.433 10:26:27 -- common/autotest_common.sh@852 -- # return 0 00:12:33.434 10:26:27 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:12:33.434 I/O targets: 00:12:33.434 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:12:33.434 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:12:33.434 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:12:33.434 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:12:33.434 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:12:33.434 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:12:33.434 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:12:33.434 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:12:33.434 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:12:33.434 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:12:33.434 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:12:33.434 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:12:33.434 raid0: 131072 blocks of 512 bytes (64 MiB) 00:12:33.434 concat0: 131072 blocks of 512 bytes (64 MiB) 00:12:33.434 raid1: 65536 blocks of 512 bytes (32 MiB) 00:12:33.434 AIO0: 5000 blocks of 2048 bytes (10 MiB) 
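bdevio runs one identical battery of blockdev read/write checks per target, walking the sixteen I/O targets listed above from AIO0 back down through the raid volumes, TestPT and the Malloc2p* splits. The harness is RPC-driven: bdevio loads the JSON config and waits, and tests.py perform_tests kicks off the run, mirroring the two commands in the log:

    # Manual replay of the bdevio harness (sketch; -s 0 passed through as issued above).
    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    "$SPDK_DIR/test/bdev/bdevio/bdevio" -w -s 0 \
        --json "$SPDK_DIR/test/bdev/bdev.json" &    # -w parks the app until told to start
    "$SPDK_DIR/test/bdev/bdevio/tests.py" perform_tests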
00:12:33.434 00:12:33.434 00:12:33.434 CUnit - A unit testing framework for C - Version 2.1-3 00:12:33.434 http://cunit.sourceforge.net/ 00:12:33.434 00:12:33.434 00:12:33.434 Suite: bdevio tests on: AIO0 00:12:33.434 Test: blockdev write read block ...passed 00:12:33.434 Test: blockdev write zeroes read block ...passed 00:12:33.434 Test: blockdev write zeroes read no split ...passed 00:12:33.434 Test: blockdev write zeroes read split ...passed 00:12:33.434 Test: blockdev write zeroes read split partial ...passed 00:12:33.434 Test: blockdev reset ...passed 00:12:33.434 Test: blockdev write read 8 blocks ...passed 00:12:33.434 Test: blockdev write read size > 128k ...passed 00:12:33.434 Test: blockdev write read invalid size ...passed 00:12:33.434 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:33.434 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:33.434 Test: blockdev write read max offset ...passed 00:12:33.434 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:33.434 Test: blockdev writev readv 8 blocks ...passed 00:12:33.434 Test: blockdev writev readv 30 x 1block ...passed 00:12:33.434 Test: blockdev writev readv block ...passed 00:12:33.434 Test: blockdev writev readv size > 128k ...passed 00:12:33.434 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:33.434 Test: blockdev comparev and writev ...passed 00:12:33.434 Test: blockdev nvme passthru rw ...passed 00:12:33.434 Test: blockdev nvme passthru vendor specific ...passed 00:12:33.434 Test: blockdev nvme admin passthru ...passed 00:12:33.434 Test: blockdev copy ...passed 00:12:33.434 Suite: bdevio tests on: raid1 00:12:33.434 Test: blockdev write read block ...passed 00:12:33.434 Test: blockdev write zeroes read block ...passed 00:12:33.434 Test: blockdev write zeroes read no split ...passed 00:12:33.434 Test: blockdev write zeroes read split ...passed 00:12:33.434 Test: blockdev write zeroes read split partial ...passed 00:12:33.434 Test: blockdev reset ...passed 00:12:33.434 Test: blockdev write read 8 blocks ...passed 00:12:33.434 Test: blockdev write read size > 128k ...passed 00:12:33.434 Test: blockdev write read invalid size ...passed 00:12:33.434 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:33.434 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:33.434 Test: blockdev write read max offset ...passed 00:12:33.434 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:33.434 Test: blockdev writev readv 8 blocks ...passed 00:12:33.434 Test: blockdev writev readv 30 x 1block ...passed 00:12:33.434 Test: blockdev writev readv block ...passed 00:12:33.434 Test: blockdev writev readv size > 128k ...passed 00:12:33.434 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:33.434 Test: blockdev comparev and writev ...passed 00:12:33.434 Test: blockdev nvme passthru rw ...passed 00:12:33.434 Test: blockdev nvme passthru vendor specific ...passed 00:12:33.434 Test: blockdev nvme admin passthru ...passed 00:12:33.434 Test: blockdev copy ...passed 00:12:33.434 Suite: bdevio tests on: concat0 00:12:33.434 Test: blockdev write read block ...passed 00:12:33.434 Test: blockdev write zeroes read block ...passed 00:12:33.434 Test: blockdev write zeroes read no split ...passed 00:12:33.434 Test: blockdev write zeroes read split ...passed 00:12:33.434 Test: blockdev write zeroes read split partial ...passed 00:12:33.434 Test: blockdev reset 
...passed 00:12:33.434 Test: blockdev write read 8 blocks ...passed 00:12:33.434 Test: blockdev write read size > 128k ...passed 00:12:33.434 Test: blockdev write read invalid size ...passed 00:12:33.434 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:33.434 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:33.434 Test: blockdev write read max offset ...passed 00:12:33.434 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:33.434 Test: blockdev writev readv 8 blocks ...passed 00:12:33.434 Test: blockdev writev readv 30 x 1block ...passed 00:12:33.434 Test: blockdev writev readv block ...passed 00:12:33.434 Test: blockdev writev readv size > 128k ...passed 00:12:33.434 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:33.434 Test: blockdev comparev and writev ...passed 00:12:33.434 Test: blockdev nvme passthru rw ...passed 00:12:33.434 Test: blockdev nvme passthru vendor specific ...passed 00:12:33.434 Test: blockdev nvme admin passthru ...passed 00:12:33.434 Test: blockdev copy ...passed 00:12:33.434 Suite: bdevio tests on: raid0 00:12:33.434 Test: blockdev write read block ...passed 00:12:33.434 Test: blockdev write zeroes read block ...passed 00:12:33.434 Test: blockdev write zeroes read no split ...passed 00:12:33.434 Test: blockdev write zeroes read split ...passed 00:12:33.703 Test: blockdev write zeroes read split partial ...passed 00:12:33.703 Test: blockdev reset ...passed 00:12:33.703 Test: blockdev write read 8 blocks ...passed 00:12:33.703 Test: blockdev write read size > 128k ...passed 00:12:33.703 Test: blockdev write read invalid size ...passed 00:12:33.703 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:33.703 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:33.703 Test: blockdev write read max offset ...passed 00:12:33.703 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:33.703 Test: blockdev writev readv 8 blocks ...passed 00:12:33.703 Test: blockdev writev readv 30 x 1block ...passed 00:12:33.703 Test: blockdev writev readv block ...passed 00:12:33.703 Test: blockdev writev readv size > 128k ...passed 00:12:33.703 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:33.703 Test: blockdev comparev and writev ...passed 00:12:33.703 Test: blockdev nvme passthru rw ...passed 00:12:33.703 Test: blockdev nvme passthru vendor specific ...passed 00:12:33.703 Test: blockdev nvme admin passthru ...passed 00:12:33.703 Test: blockdev copy ...passed 00:12:33.703 Suite: bdevio tests on: TestPT 00:12:33.703 Test: blockdev write read block ...passed 00:12:33.703 Test: blockdev write zeroes read block ...passed 00:12:33.703 Test: blockdev write zeroes read no split ...passed 00:12:33.703 Test: blockdev write zeroes read split ...passed 00:12:33.703 Test: blockdev write zeroes read split partial ...passed 00:12:33.703 Test: blockdev reset ...passed 00:12:33.703 Test: blockdev write read 8 blocks ...passed 00:12:33.703 Test: blockdev write read size > 128k ...passed 00:12:33.703 Test: blockdev write read invalid size ...passed 00:12:33.703 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:33.703 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:33.703 Test: blockdev write read max offset ...passed 00:12:33.703 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:33.703 Test: blockdev writev readv 8 blocks 
...passed 00:12:33.703 Test: blockdev writev readv 30 x 1block ...passed 00:12:33.703 Test: blockdev writev readv block ...passed 00:12:33.703 Test: blockdev writev readv size > 128k ...passed 00:12:33.703 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:33.703 Test: blockdev comparev and writev ...passed 00:12:33.703 Test: blockdev nvme passthru rw ...passed 00:12:33.703 Test: blockdev nvme passthru vendor specific ...passed 00:12:33.703 Test: blockdev nvme admin passthru ...passed 00:12:33.703 Test: blockdev copy ...passed 00:12:33.703 Suite: bdevio tests on: Malloc2p7 00:12:33.703 Test: blockdev write read block ...passed 00:12:33.703 Test: blockdev write zeroes read block ...passed 00:12:33.703 Test: blockdev write zeroes read no split ...passed 00:12:33.703 Test: blockdev write zeroes read split ...passed 00:12:33.703 Test: blockdev write zeroes read split partial ...passed 00:12:33.703 Test: blockdev reset ...passed 00:12:33.703 Test: blockdev write read 8 blocks ...passed 00:12:33.703 Test: blockdev write read size > 128k ...passed 00:12:33.703 Test: blockdev write read invalid size ...passed 00:12:33.704 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:33.704 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:33.704 Test: blockdev write read max offset ...passed 00:12:33.704 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:33.704 Test: blockdev writev readv 8 blocks ...passed 00:12:33.704 Test: blockdev writev readv 30 x 1block ...passed 00:12:33.704 Test: blockdev writev readv block ...passed 00:12:33.704 Test: blockdev writev readv size > 128k ...passed 00:12:33.704 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:33.704 Test: blockdev comparev and writev ...passed 00:12:33.704 Test: blockdev nvme passthru rw ...passed 00:12:33.704 Test: blockdev nvme passthru vendor specific ...passed 00:12:33.704 Test: blockdev nvme admin passthru ...passed 00:12:33.704 Test: blockdev copy ...passed 00:12:33.704 Suite: bdevio tests on: Malloc2p6 00:12:33.704 Test: blockdev write read block ...passed 00:12:33.704 Test: blockdev write zeroes read block ...passed 00:12:33.704 Test: blockdev write zeroes read no split ...passed 00:12:33.704 Test: blockdev write zeroes read split ...passed 00:12:33.704 Test: blockdev write zeroes read split partial ...passed 00:12:33.704 Test: blockdev reset ...passed 00:12:33.704 Test: blockdev write read 8 blocks ...passed 00:12:33.704 Test: blockdev write read size > 128k ...passed 00:12:33.704 Test: blockdev write read invalid size ...passed 00:12:33.704 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:33.704 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:33.704 Test: blockdev write read max offset ...passed 00:12:33.704 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:33.704 Test: blockdev writev readv 8 blocks ...passed 00:12:33.704 Test: blockdev writev readv 30 x 1block ...passed 00:12:33.704 Test: blockdev writev readv block ...passed 00:12:33.704 Test: blockdev writev readv size > 128k ...passed 00:12:33.704 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:33.704 Test: blockdev comparev and writev ...passed 00:12:33.704 Test: blockdev nvme passthru rw ...passed 00:12:33.704 Test: blockdev nvme passthru vendor specific ...passed 00:12:33.704 Test: blockdev nvme admin passthru ...passed 00:12:33.704 Test: blockdev copy ...passed 
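Each bdevio suite above runs the same battery of checks against a different bdev: plain and vectored write/read round trips, zero-fill reads, boundary and invalid-size I/O, reset, compare-and-write, NVMe passthru, and copy. A minimal illustrative sketch of the core "write read block" round trip, using a scratch file in place of a block device (the filenames pattern/readback and the 256 KiB size are hypothetical, and this is not the SPDK bdevio harness itself):

#!/usr/bin/env bash
set -euo pipefail

dev=$(mktemp)                                   # scratch file standing in for /dev/nbdX
trap 'rm -f "$dev" pattern readback' EXIT

dd if=/dev/zero of="$dev" bs=4096 count=64 status=none       # 256 KiB "bdev"
dd if=/dev/urandom of=pattern bs=4096 count=1 status=none    # one block of test data

# Write the block at LBA 8, read it back, and verify the round trip --
# the essence of the "blockdev write read block" check above.
dd if=pattern of="$dev" bs=4096 seek=8 count=1 conv=notrunc status=none
dd if="$dev" of=readback bs=4096 skip=8 count=1 status=none
cmp pattern readback && echo "write read block ...passed"
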
00:12:33.704 Suite: bdevio tests on: Malloc2p5 00:12:33.704 Test: blockdev write read block ...passed 00:12:33.704 Test: blockdev write zeroes read block ...passed 00:12:33.704 Test: blockdev write zeroes read no split ...passed 00:12:33.704 Test: blockdev write zeroes read split ...passed 00:12:33.704 Test: blockdev write zeroes read split partial ...passed 00:12:33.704 Test: blockdev reset ...passed 00:12:33.704 Test: blockdev write read 8 blocks ...passed 00:12:33.704 Test: blockdev write read size > 128k ...passed 00:12:33.704 Test: blockdev write read invalid size ...passed 00:12:33.704 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:33.704 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:33.704 Test: blockdev write read max offset ...passed 00:12:33.705 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:33.705 Test: blockdev writev readv 8 blocks ...passed 00:12:33.705 Test: blockdev writev readv 30 x 1block ...passed 00:12:33.705 Test: blockdev writev readv block ...passed 00:12:33.705 Test: blockdev writev readv size > 128k ...passed 00:12:33.705 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:33.705 Test: blockdev comparev and writev ...passed 00:12:33.705 Test: blockdev nvme passthru rw ...passed 00:12:33.705 Test: blockdev nvme passthru vendor specific ...passed 00:12:33.705 Test: blockdev nvme admin passthru ...passed 00:12:33.705 Test: blockdev copy ...passed 00:12:33.705 Suite: bdevio tests on: Malloc2p4 00:12:33.705 Test: blockdev write read block ...passed 00:12:33.705 Test: blockdev write zeroes read block ...passed 00:12:33.705 Test: blockdev write zeroes read no split ...passed 00:12:33.705 Test: blockdev write zeroes read split ...passed 00:12:33.971 Test: blockdev write zeroes read split partial ...passed 00:12:33.971 Test: blockdev reset ...passed 00:12:33.971 Test: blockdev write read 8 blocks ...passed 00:12:33.971 Test: blockdev write read size > 128k ...passed 00:12:33.971 Test: blockdev write read invalid size ...passed 00:12:33.971 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:33.971 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:33.971 Test: blockdev write read max offset ...passed 00:12:33.971 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:33.971 Test: blockdev writev readv 8 blocks ...passed 00:12:33.971 Test: blockdev writev readv 30 x 1block ...passed 00:12:33.971 Test: blockdev writev readv block ...passed 00:12:33.971 Test: blockdev writev readv size > 128k ...passed 00:12:33.971 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:33.971 Test: blockdev comparev and writev ...passed 00:12:33.971 Test: blockdev nvme passthru rw ...passed 00:12:33.971 Test: blockdev nvme passthru vendor specific ...passed 00:12:33.971 Test: blockdev nvme admin passthru ...passed 00:12:33.971 Test: blockdev copy ...passed 00:12:33.971 Suite: bdevio tests on: Malloc2p3 00:12:33.971 Test: blockdev write read block ...passed 00:12:33.971 Test: blockdev write zeroes read block ...passed 00:12:33.971 Test: blockdev write zeroes read no split ...passed 00:12:33.971 Test: blockdev write zeroes read split ...passed 00:12:33.971 Test: blockdev write zeroes read split partial ...passed 00:12:33.971 Test: blockdev reset ...passed 00:12:33.971 Test: blockdev write read 8 blocks ...passed 00:12:33.971 Test: blockdev write read size > 128k ...passed 00:12:33.971 Test: 
blockdev write read invalid size ...passed 00:12:33.971 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:33.971 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:33.971 Test: blockdev write read max offset ...passed 00:12:33.971 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:33.971 Test: blockdev writev readv 8 blocks ...passed 00:12:33.971 Test: blockdev writev readv 30 x 1block ...passed 00:12:33.971 Test: blockdev writev readv block ...passed 00:12:33.971 Test: blockdev writev readv size > 128k ...passed 00:12:33.971 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:33.971 Test: blockdev comparev and writev ...passed 00:12:33.971 Test: blockdev nvme passthru rw ...passed 00:12:33.971 Test: blockdev nvme passthru vendor specific ...passed 00:12:33.971 Test: blockdev nvme admin passthru ...passed 00:12:33.971 Test: blockdev copy ...passed 00:12:33.971 Suite: bdevio tests on: Malloc2p2 00:12:33.971 Test: blockdev write read block ...passed 00:12:33.971 Test: blockdev write zeroes read block ...passed 00:12:33.971 Test: blockdev write zeroes read no split ...passed 00:12:33.971 Test: blockdev write zeroes read split ...passed 00:12:33.971 Test: blockdev write zeroes read split partial ...passed 00:12:33.971 Test: blockdev reset ...passed 00:12:33.971 Test: blockdev write read 8 blocks ...passed 00:12:33.971 Test: blockdev write read size > 128k ...passed 00:12:33.971 Test: blockdev write read invalid size ...passed 00:12:33.971 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:33.971 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:33.971 Test: blockdev write read max offset ...passed 00:12:33.971 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:33.971 Test: blockdev writev readv 8 blocks ...passed 00:12:33.971 Test: blockdev writev readv 30 x 1block ...passed 00:12:33.971 Test: blockdev writev readv block ...passed 00:12:33.971 Test: blockdev writev readv size > 128k ...passed 00:12:33.971 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:33.971 Test: blockdev comparev and writev ...passed 00:12:33.971 Test: blockdev nvme passthru rw ...passed 00:12:33.971 Test: blockdev nvme passthru vendor specific ...passed 00:12:33.971 Test: blockdev nvme admin passthru ...passed 00:12:33.971 Test: blockdev copy ...passed 00:12:33.971 Suite: bdevio tests on: Malloc2p1 00:12:33.971 Test: blockdev write read block ...passed 00:12:33.971 Test: blockdev write zeroes read block ...passed 00:12:33.971 Test: blockdev write zeroes read no split ...passed 00:12:33.971 Test: blockdev write zeroes read split ...passed 00:12:33.971 Test: blockdev write zeroes read split partial ...passed 00:12:33.971 Test: blockdev reset ...passed 00:12:33.971 Test: blockdev write read 8 blocks ...passed 00:12:33.971 Test: blockdev write read size > 128k ...passed 00:12:33.971 Test: blockdev write read invalid size ...passed 00:12:33.971 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:33.971 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:33.971 Test: blockdev write read max offset ...passed 00:12:33.971 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:33.971 Test: blockdev writev readv 8 blocks ...passed 00:12:33.971 Test: blockdev writev readv 30 x 1block ...passed 00:12:33.971 Test: blockdev writev readv block ...passed 
00:12:33.971 Test: blockdev writev readv size > 128k ...passed 00:12:33.971 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:33.971 Test: blockdev comparev and writev ...passed 00:12:33.971 Test: blockdev nvme passthru rw ...passed 00:12:33.971 Test: blockdev nvme passthru vendor specific ...passed 00:12:33.971 Test: blockdev nvme admin passthru ...passed 00:12:33.971 Test: blockdev copy ...passed 00:12:33.971 Suite: bdevio tests on: Malloc2p0 00:12:33.971 Test: blockdev write read block ...passed 00:12:33.971 Test: blockdev write zeroes read block ...passed 00:12:33.971 Test: blockdev write zeroes read no split ...passed 00:12:33.971 Test: blockdev write zeroes read split ...passed 00:12:33.971 Test: blockdev write zeroes read split partial ...passed 00:12:33.971 Test: blockdev reset ...passed 00:12:33.971 Test: blockdev write read 8 blocks ...passed 00:12:33.971 Test: blockdev write read size > 128k ...passed 00:12:33.971 Test: blockdev write read invalid size ...passed 00:12:33.971 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:33.971 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:33.971 Test: blockdev write read max offset ...passed 00:12:33.971 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:33.971 Test: blockdev writev readv 8 blocks ...passed 00:12:33.971 Test: blockdev writev readv 30 x 1block ...passed 00:12:33.971 Test: blockdev writev readv block ...passed 00:12:33.971 Test: blockdev writev readv size > 128k ...passed 00:12:33.971 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:33.971 Test: blockdev comparev and writev ...passed 00:12:33.971 Test: blockdev nvme passthru rw ...passed 00:12:33.971 Test: blockdev nvme passthru vendor specific ...passed 00:12:33.971 Test: blockdev nvme admin passthru ...passed 00:12:33.971 Test: blockdev copy ...passed 00:12:33.971 Suite: bdevio tests on: Malloc1p1 00:12:33.971 Test: blockdev write read block ...passed 00:12:33.971 Test: blockdev write zeroes read block ...passed 00:12:33.971 Test: blockdev write zeroes read no split ...passed 00:12:33.971 Test: blockdev write zeroes read split ...passed 00:12:33.971 Test: blockdev write zeroes read split partial ...passed 00:12:33.971 Test: blockdev reset ...passed 00:12:33.971 Test: blockdev write read 8 blocks ...passed 00:12:33.971 Test: blockdev write read size > 128k ...passed 00:12:33.971 Test: blockdev write read invalid size ...passed 00:12:33.971 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:33.971 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:33.971 Test: blockdev write read max offset ...passed 00:12:33.971 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:33.971 Test: blockdev writev readv 8 blocks ...passed 00:12:33.971 Test: blockdev writev readv 30 x 1block ...passed 00:12:33.971 Test: blockdev writev readv block ...passed 00:12:33.971 Test: blockdev writev readv size > 128k ...passed 00:12:33.971 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:33.971 Test: blockdev comparev and writev ...passed 00:12:33.971 Test: blockdev nvme passthru rw ...passed 00:12:33.971 Test: blockdev nvme passthru vendor specific ...passed 00:12:33.971 Test: blockdev nvme admin passthru ...passed 00:12:33.971 Test: blockdev copy ...passed 00:12:33.971 Suite: bdevio tests on: Malloc1p0 00:12:33.971 Test: blockdev write read block ...passed 00:12:33.971 Test: blockdev 
write zeroes read block ...passed 00:12:33.971 Test: blockdev write zeroes read no split ...passed 00:12:33.971 Test: blockdev write zeroes read split ...passed 00:12:33.971 Test: blockdev write zeroes read split partial ...passed 00:12:33.971 Test: blockdev reset ...passed 00:12:33.971 Test: blockdev write read 8 blocks ...passed 00:12:33.971 Test: blockdev write read size > 128k ...passed 00:12:33.971 Test: blockdev write read invalid size ...passed 00:12:33.971 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:33.971 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:33.971 Test: blockdev write read max offset ...passed 00:12:33.971 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:33.971 Test: blockdev writev readv 8 blocks ...passed 00:12:33.971 Test: blockdev writev readv 30 x 1block ...passed 00:12:33.971 Test: blockdev writev readv block ...passed 00:12:33.971 Test: blockdev writev readv size > 128k ...passed 00:12:33.971 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:33.972 Test: blockdev comparev and writev ...passed 00:12:33.972 Test: blockdev nvme passthru rw ...passed 00:12:33.972 Test: blockdev nvme passthru vendor specific ...passed 00:12:33.972 Test: blockdev nvme admin passthru ...passed 00:12:33.972 Test: blockdev copy ...passed 00:12:33.972 Suite: bdevio tests on: Malloc0 00:12:33.972 Test: blockdev write read block ...passed 00:12:33.972 Test: blockdev write zeroes read block ...passed 00:12:34.229 Test: blockdev write zeroes read no split ...passed 00:12:34.229 Test: blockdev write zeroes read split ...passed 00:12:34.229 Test: blockdev write zeroes read split partial ...passed 00:12:34.229 Test: blockdev reset ...passed 00:12:34.229 Test: blockdev write read 8 blocks ...passed 00:12:34.229 Test: blockdev write read size > 128k ...passed 00:12:34.229 Test: blockdev write read invalid size ...passed 00:12:34.229 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:34.229 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:34.229 Test: blockdev write read max offset ...passed 00:12:34.229 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:34.229 Test: blockdev writev readv 8 blocks ...passed 00:12:34.229 Test: blockdev writev readv 30 x 1block ...passed 00:12:34.229 Test: blockdev writev readv block ...passed 00:12:34.229 Test: blockdev writev readv size > 128k ...passed 00:12:34.229 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:34.229 Test: blockdev comparev and writev ...passed 00:12:34.229 Test: blockdev nvme passthru rw ...passed 00:12:34.229 Test: blockdev nvme passthru vendor specific ...passed 00:12:34.229 Test: blockdev nvme admin passthru ...passed 00:12:34.229 Test: blockdev copy ...passed 00:12:34.229 00:12:34.229 Run Summary: Type Total Ran Passed Failed Inactive 00:12:34.230 suites 16 16 n/a 0 0 00:12:34.230 tests 368 368 368 0 0 00:12:34.230 asserts 2224 2224 2224 0 n/a 00:12:34.230 00:12:34.230 Elapsed time = 2.256 seconds 00:12:34.230 0 00:12:34.230 10:26:27 -- bdev/blockdev.sh@293 -- # killprocess 111313 00:12:34.230 10:26:27 -- common/autotest_common.sh@926 -- # '[' -z 111313 ']' 00:12:34.230 10:26:27 -- common/autotest_common.sh@930 -- # kill -0 111313 00:12:34.230 10:26:27 -- common/autotest_common.sh@931 -- # uname 00:12:34.230 10:26:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:34.230 10:26:27 -- 
common/autotest_common.sh@932 -- # ps --no-headers -o comm= 111313 00:12:34.230 10:26:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:34.230 10:26:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:34.230 killing process with pid 111313 00:12:34.230 10:26:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 111313' 00:12:34.230 10:26:28 -- common/autotest_common.sh@945 -- # kill 111313 00:12:34.230 10:26:28 -- common/autotest_common.sh@950 -- # wait 111313 00:12:35.604 10:26:29 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:12:35.604 00:12:35.604 real 0m4.100s 00:12:35.604 user 0m10.612s 00:12:35.604 sys 0m0.522s 00:12:35.604 ************************************ 00:12:35.604 END TEST bdev_bounds 00:12:35.604 ************************************ 00:12:35.604 10:26:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:35.604 10:26:29 -- common/autotest_common.sh@10 -- # set +x 00:12:35.862 10:26:29 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:12:35.863 10:26:29 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:12:35.863 10:26:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:35.863 10:26:29 -- common/autotest_common.sh@10 -- # set +x 00:12:35.863 ************************************ 00:12:35.863 START TEST bdev_nbd 00:12:35.863 ************************************ 00:12:35.863 10:26:29 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:12:35.863 10:26:29 -- bdev/blockdev.sh@298 -- # uname -s 00:12:35.863 10:26:29 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:12:35.863 10:26:29 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:35.863 10:26:29 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:35.863 10:26:29 -- bdev/blockdev.sh@302 -- # bdev_all=($2) 00:12:35.863 10:26:29 -- bdev/blockdev.sh@302 -- # local bdev_all 00:12:35.863 10:26:29 -- bdev/blockdev.sh@303 -- # local bdev_num=16 00:12:35.863 10:26:29 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:12:35.863 10:26:29 -- bdev/blockdev.sh@309 -- # nbd_all=(/dev/nbd+([0-9])) 00:12:35.863 10:26:29 -- bdev/blockdev.sh@309 -- # local nbd_all 00:12:35.863 10:26:29 -- bdev/blockdev.sh@310 -- # bdev_num=16 00:12:35.863 10:26:29 -- bdev/blockdev.sh@312 -- # nbd_list=(${nbd_all[@]::bdev_num}) 00:12:35.863 10:26:29 -- bdev/blockdev.sh@312 -- # local nbd_list 00:12:35.863 10:26:29 -- bdev/blockdev.sh@313 -- # bdev_list=(${bdev_all[@]::bdev_num}) 00:12:35.863 10:26:29 -- bdev/blockdev.sh@313 -- # local bdev_list 00:12:35.863 10:26:29 -- bdev/blockdev.sh@316 -- # nbd_pid=111422 00:12:35.863 10:26:29 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:35.863 10:26:29 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:12:35.863 10:26:29 -- bdev/blockdev.sh@318 -- # waitforlisten 111422 /var/tmp/spdk-nbd.sock 00:12:35.863 10:26:29 -- common/autotest_common.sh@819 -- # '[' -z 111422 ']' 
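The killprocess trace a few entries back follows a clear shape: validate the pid, confirm the process is alive with kill -0, identify it via ps (reactor_0, the SPDK app, on this Linux run), then signal and reap it. A hedged reconstruction from the xtrace alone; the real autotest_common.sh may differ, and the sudo comparison appears here only as the guard the trace exercises:

killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1
    kill -0 "$pid" || return 1                        # still alive?
    local process_name=""
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    if [ "$process_name" = sudo ]; then
        :   # the traced run takes the plain path (reactor_0); the sudo branch is not shown
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                       # reap so the caller can assert exit status
}
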
00:12:35.863 10:26:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:35.863 10:26:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:35.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:35.863 10:26:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:35.863 10:26:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:35.863 10:26:29 -- common/autotest_common.sh@10 -- # set +x 00:12:35.863 [2024-07-12 10:26:29.617867] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:12:35.863 [2024-07-12 10:26:29.618059] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:35.863 [2024-07-12 10:26:29.771065] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:36.120 [2024-07-12 10:26:29.934879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.378 [2024-07-12 10:26:30.258812] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:36.378 [2024-07-12 10:26:30.258937] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:36.378 [2024-07-12 10:26:30.266757] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:36.378 [2024-07-12 10:26:30.266852] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:36.378 [2024-07-12 10:26:30.274760] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:36.378 [2024-07-12 10:26:30.274827] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:36.378 [2024-07-12 10:26:30.274860] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:36.636 [2024-07-12 10:26:30.461602] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:36.636 [2024-07-12 10:26:30.461754] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:36.636 [2024-07-12 10:26:30.461808] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:12:36.636 [2024-07-12 10:26:30.461838] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:36.636 [2024-07-12 10:26:30.464288] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:36.636 [2024-07-12 10:26:30.464371] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:37.571 10:26:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:37.571 10:26:31 -- common/autotest_common.sh@852 -- # return 0 00:12:37.571 10:26:31 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:12:37.571 10:26:31 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:37.571 10:26:31 -- bdev/nbd_common.sh@114 -- # bdev_list=($2) 00:12:37.571 10:26:31 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:12:37.571 10:26:31 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Malloc0 
Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:12:37.571 10:26:31 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:37.571 10:26:31 -- bdev/nbd_common.sh@23 -- # bdev_list=($2) 00:12:37.571 10:26:31 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:12:37.571 10:26:31 -- bdev/nbd_common.sh@24 -- # local i 00:12:37.571 10:26:31 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:12:37.571 10:26:31 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:12:37.571 10:26:31 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:37.571 10:26:31 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 00:12:37.830 10:26:31 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:12:37.830 10:26:31 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:12:37.830 10:26:31 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:12:37.830 10:26:31 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:12:37.830 10:26:31 -- common/autotest_common.sh@857 -- # local i 00:12:37.830 10:26:31 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:37.830 10:26:31 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:37.830 10:26:31 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:12:37.830 10:26:31 -- common/autotest_common.sh@861 -- # break 00:12:37.830 10:26:31 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:37.830 10:26:31 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:37.830 10:26:31 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:37.830 1+0 records in 00:12:37.830 1+0 records out 00:12:37.830 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000171634 s, 23.9 MB/s 00:12:37.830 10:26:31 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:37.830 10:26:31 -- common/autotest_common.sh@874 -- # size=4096 00:12:37.830 10:26:31 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:37.830 10:26:31 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:37.830 10:26:31 -- common/autotest_common.sh@877 -- # return 0 00:12:37.830 10:26:31 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:37.830 10:26:31 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:37.830 10:26:31 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 00:12:38.088 10:26:31 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:12:38.088 10:26:31 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:12:38.088 10:26:31 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:12:38.088 10:26:31 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:12:38.089 10:26:31 -- common/autotest_common.sh@857 -- # local i 00:12:38.089 10:26:31 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:38.089 10:26:31 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:38.089 10:26:31 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:12:38.089 10:26:31 -- common/autotest_common.sh@861 -- # break 00:12:38.089 10:26:31 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:38.089 10:26:31 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:38.089 10:26:31 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 
00:12:38.089 1+0 records in 00:12:38.089 1+0 records out 00:12:38.089 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000322642 s, 12.7 MB/s 00:12:38.089 10:26:31 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:38.089 10:26:31 -- common/autotest_common.sh@874 -- # size=4096 00:12:38.089 10:26:31 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:38.089 10:26:31 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:38.089 10:26:31 -- common/autotest_common.sh@877 -- # return 0 00:12:38.089 10:26:31 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:38.089 10:26:31 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:38.089 10:26:31 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 00:12:38.349 10:26:32 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:12:38.349 10:26:32 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:12:38.349 10:26:32 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:12:38.349 10:26:32 -- common/autotest_common.sh@856 -- # local nbd_name=nbd2 00:12:38.349 10:26:32 -- common/autotest_common.sh@857 -- # local i 00:12:38.349 10:26:32 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:38.349 10:26:32 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:38.349 10:26:32 -- common/autotest_common.sh@860 -- # grep -q -w nbd2 /proc/partitions 00:12:38.349 10:26:32 -- common/autotest_common.sh@861 -- # break 00:12:38.349 10:26:32 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:38.349 10:26:32 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:38.349 10:26:32 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:38.349 1+0 records in 00:12:38.349 1+0 records out 00:12:38.349 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000460271 s, 8.9 MB/s 00:12:38.349 10:26:32 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:38.349 10:26:32 -- common/autotest_common.sh@874 -- # size=4096 00:12:38.349 10:26:32 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:38.349 10:26:32 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:38.349 10:26:32 -- common/autotest_common.sh@877 -- # return 0 00:12:38.349 10:26:32 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:38.349 10:26:32 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:38.349 10:26:32 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 00:12:38.624 10:26:32 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:12:38.624 10:26:32 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:12:38.624 10:26:32 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:12:38.624 10:26:32 -- common/autotest_common.sh@856 -- # local nbd_name=nbd3 00:12:38.624 10:26:32 -- common/autotest_common.sh@857 -- # local i 00:12:38.624 10:26:32 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:38.624 10:26:32 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:38.624 10:26:32 -- common/autotest_common.sh@860 -- # grep -q -w nbd3 /proc/partitions 00:12:38.624 10:26:32 -- common/autotest_common.sh@861 -- # break 00:12:38.624 10:26:32 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:38.624 10:26:32 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:38.624 10:26:32 -- common/autotest_common.sh@873 -- # dd 
if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:38.624 1+0 records in 00:12:38.624 1+0 records out 00:12:38.624 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000410442 s, 10.0 MB/s 00:12:38.624 10:26:32 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:38.624 10:26:32 -- common/autotest_common.sh@874 -- # size=4096 00:12:38.624 10:26:32 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:38.624 10:26:32 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:38.624 10:26:32 -- common/autotest_common.sh@877 -- # return 0 00:12:38.624 10:26:32 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:38.624 10:26:32 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:38.624 10:26:32 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 00:12:38.624 10:26:32 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:12:38.624 10:26:32 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:12:38.624 10:26:32 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:12:38.624 10:26:32 -- common/autotest_common.sh@856 -- # local nbd_name=nbd4 00:12:38.624 10:26:32 -- common/autotest_common.sh@857 -- # local i 00:12:38.624 10:26:32 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:38.624 10:26:32 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:38.624 10:26:32 -- common/autotest_common.sh@860 -- # grep -q -w nbd4 /proc/partitions 00:12:38.624 10:26:32 -- common/autotest_common.sh@861 -- # break 00:12:38.624 10:26:32 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:38.624 10:26:32 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:38.624 10:26:32 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:38.624 1+0 records in 00:12:38.624 1+0 records out 00:12:38.624 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000300426 s, 13.6 MB/s 00:12:38.624 10:26:32 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:38.624 10:26:32 -- common/autotest_common.sh@874 -- # size=4096 00:12:38.624 10:26:32 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:38.624 10:26:32 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:38.624 10:26:32 -- common/autotest_common.sh@877 -- # return 0 00:12:38.624 10:26:32 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:38.624 10:26:32 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:38.624 10:26:32 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 00:12:38.921 10:26:32 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:12:38.921 10:26:32 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:12:38.921 10:26:32 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:12:38.921 10:26:32 -- common/autotest_common.sh@856 -- # local nbd_name=nbd5 00:12:38.921 10:26:32 -- common/autotest_common.sh@857 -- # local i 00:12:38.921 10:26:32 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:38.921 10:26:32 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:38.921 10:26:32 -- common/autotest_common.sh@860 -- # grep -q -w nbd5 /proc/partitions 00:12:38.921 10:26:32 -- common/autotest_common.sh@861 -- # break 00:12:38.921 10:26:32 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:38.921 10:26:32 -- 
common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:38.921 10:26:32 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:38.921 1+0 records in 00:12:38.921 1+0 records out 00:12:38.921 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000343229 s, 11.9 MB/s 00:12:38.921 10:26:32 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:38.921 10:26:32 -- common/autotest_common.sh@874 -- # size=4096 00:12:38.921 10:26:32 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:38.921 10:26:32 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:38.921 10:26:32 -- common/autotest_common.sh@877 -- # return 0 00:12:38.921 10:26:32 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:38.921 10:26:32 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:38.921 10:26:32 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 00:12:39.198 10:26:32 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:12:39.198 10:26:32 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:12:39.198 10:26:32 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:12:39.198 10:26:32 -- common/autotest_common.sh@856 -- # local nbd_name=nbd6 00:12:39.198 10:26:32 -- common/autotest_common.sh@857 -- # local i 00:12:39.198 10:26:32 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:39.198 10:26:32 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:39.198 10:26:32 -- common/autotest_common.sh@860 -- # grep -q -w nbd6 /proc/partitions 00:12:39.198 10:26:32 -- common/autotest_common.sh@861 -- # break 00:12:39.198 10:26:32 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:39.198 10:26:32 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:39.198 10:26:32 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:39.198 1+0 records in 00:12:39.198 1+0 records out 00:12:39.198 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00276409 s, 1.5 MB/s 00:12:39.198 10:26:32 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:39.198 10:26:32 -- common/autotest_common.sh@874 -- # size=4096 00:12:39.198 10:26:32 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:39.198 10:26:32 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:39.198 10:26:32 -- common/autotest_common.sh@877 -- # return 0 00:12:39.198 10:26:32 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:39.198 10:26:32 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:39.198 10:26:32 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 00:12:39.457 10:26:33 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd7 00:12:39.457 10:26:33 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd7 00:12:39.457 10:26:33 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd7 00:12:39.457 10:26:33 -- common/autotest_common.sh@856 -- # local nbd_name=nbd7 00:12:39.457 10:26:33 -- common/autotest_common.sh@857 -- # local i 00:12:39.457 10:26:33 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:39.457 10:26:33 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:39.457 10:26:33 -- common/autotest_common.sh@860 -- # grep -q -w nbd7 /proc/partitions 00:12:39.457 10:26:33 -- common/autotest_common.sh@861 -- # break 
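The waitfornbd loops being traced here have two phases: poll /proc/partitions until the kernel publishes the device node, then prove the device actually serves I/O by reading one 4 KiB block with O_DIRECT and checking that a non-zero byte count landed in the scratch file. A hedged reconstruction (the retry count of 20 and the scratch path are taken from the trace; other details may differ from the real helper):

waitfornbd() {
    local nbd_name=$1 i
    local scratch=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest

    # Phase 1: wait (up to 20 probes) for the device node to register.
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
    done

    # Phase 2: one direct-I/O read; a 4096-byte result means the bdev
    # behind the NBD socket is really answering requests.
    for ((i = 1; i <= 20; i++)); do
        if dd "if=/dev/$nbd_name" of="$scratch" bs=4096 count=1 iflag=direct; then
            local size
            size=$(stat -c %s "$scratch")
            rm -f "$scratch"
            [ "$size" != 0 ] && return 0
        fi
    done
    return 1
}
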
00:12:39.457 10:26:33 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:39.457 10:26:33 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:39.457 10:26:33 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:39.457 1+0 records in 00:12:39.457 1+0 records out 00:12:39.457 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000513871 s, 8.0 MB/s 00:12:39.457 10:26:33 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:39.457 10:26:33 -- common/autotest_common.sh@874 -- # size=4096 00:12:39.457 10:26:33 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:39.457 10:26:33 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:39.457 10:26:33 -- common/autotest_common.sh@877 -- # return 0 00:12:39.457 10:26:33 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:39.457 10:26:33 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:39.457 10:26:33 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 00:12:39.716 10:26:33 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd8 00:12:39.716 10:26:33 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd8 00:12:39.716 10:26:33 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd8 00:12:39.716 10:26:33 -- common/autotest_common.sh@856 -- # local nbd_name=nbd8 00:12:39.716 10:26:33 -- common/autotest_common.sh@857 -- # local i 00:12:39.716 10:26:33 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:39.716 10:26:33 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:39.716 10:26:33 -- common/autotest_common.sh@860 -- # grep -q -w nbd8 /proc/partitions 00:12:39.716 10:26:33 -- common/autotest_common.sh@861 -- # break 00:12:39.716 10:26:33 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:39.716 10:26:33 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:39.716 10:26:33 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:39.716 1+0 records in 00:12:39.716 1+0 records out 00:12:39.716 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000667767 s, 6.1 MB/s 00:12:39.716 10:26:33 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:39.716 10:26:33 -- common/autotest_common.sh@874 -- # size=4096 00:12:39.716 10:26:33 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:39.716 10:26:33 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:39.716 10:26:33 -- common/autotest_common.sh@877 -- # return 0 00:12:39.716 10:26:33 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:39.716 10:26:33 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:39.716 10:26:33 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 00:12:39.974 10:26:33 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd9 00:12:39.974 10:26:33 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd9 00:12:39.974 10:26:33 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd9 00:12:39.974 10:26:33 -- common/autotest_common.sh@856 -- # local nbd_name=nbd9 00:12:39.974 10:26:33 -- common/autotest_common.sh@857 -- # local i 00:12:39.974 10:26:33 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:39.974 10:26:33 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:39.974 10:26:33 -- common/autotest_common.sh@860 -- # grep -q -w 
nbd9 /proc/partitions 00:12:39.974 10:26:33 -- common/autotest_common.sh@861 -- # break 00:12:39.974 10:26:33 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:39.974 10:26:33 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:39.974 10:26:33 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:39.974 1+0 records in 00:12:39.974 1+0 records out 00:12:39.974 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00038825 s, 10.5 MB/s 00:12:39.974 10:26:33 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:39.974 10:26:33 -- common/autotest_common.sh@874 -- # size=4096 00:12:39.974 10:26:33 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:39.974 10:26:33 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:39.974 10:26:33 -- common/autotest_common.sh@877 -- # return 0 00:12:39.974 10:26:33 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:39.974 10:26:33 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:39.974 10:26:33 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 00:12:40.232 10:26:33 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd10 00:12:40.232 10:26:33 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd10 00:12:40.232 10:26:33 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd10 00:12:40.232 10:26:33 -- common/autotest_common.sh@856 -- # local nbd_name=nbd10 00:12:40.232 10:26:33 -- common/autotest_common.sh@857 -- # local i 00:12:40.232 10:26:33 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:40.232 10:26:33 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:40.232 10:26:33 -- common/autotest_common.sh@860 -- # grep -q -w nbd10 /proc/partitions 00:12:40.232 10:26:33 -- common/autotest_common.sh@861 -- # break 00:12:40.232 10:26:33 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:40.232 10:26:33 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:40.232 10:26:33 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:40.232 1+0 records in 00:12:40.232 1+0 records out 00:12:40.232 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000579013 s, 7.1 MB/s 00:12:40.232 10:26:33 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:40.232 10:26:33 -- common/autotest_common.sh@874 -- # size=4096 00:12:40.232 10:26:33 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:40.232 10:26:33 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:40.232 10:26:33 -- common/autotest_common.sh@877 -- # return 0 00:12:40.232 10:26:33 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:40.232 10:26:33 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:40.232 10:26:33 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT 00:12:40.490 10:26:34 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd11 00:12:40.490 10:26:34 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd11 00:12:40.490 10:26:34 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd11 00:12:40.490 10:26:34 -- common/autotest_common.sh@856 -- # local nbd_name=nbd11 00:12:40.490 10:26:34 -- common/autotest_common.sh@857 -- # local i 00:12:40.490 10:26:34 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:40.490 10:26:34 -- 
common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:40.490 10:26:34 -- common/autotest_common.sh@860 -- # grep -q -w nbd11 /proc/partitions 00:12:40.490 10:26:34 -- common/autotest_common.sh@861 -- # break 00:12:40.490 10:26:34 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:40.490 10:26:34 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:40.490 10:26:34 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:40.490 1+0 records in 00:12:40.490 1+0 records out 00:12:40.490 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000857981 s, 4.8 MB/s 00:12:40.490 10:26:34 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:40.490 10:26:34 -- common/autotest_common.sh@874 -- # size=4096 00:12:40.490 10:26:34 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:40.490 10:26:34 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:40.490 10:26:34 -- common/autotest_common.sh@877 -- # return 0 00:12:40.490 10:26:34 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:40.490 10:26:34 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:40.490 10:26:34 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 00:12:40.748 10:26:34 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd12 00:12:40.748 10:26:34 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd12 00:12:40.748 10:26:34 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd12 00:12:40.748 10:26:34 -- common/autotest_common.sh@856 -- # local nbd_name=nbd12 00:12:40.748 10:26:34 -- common/autotest_common.sh@857 -- # local i 00:12:40.748 10:26:34 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:40.748 10:26:34 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:40.748 10:26:34 -- common/autotest_common.sh@860 -- # grep -q -w nbd12 /proc/partitions 00:12:40.749 10:26:34 -- common/autotest_common.sh@861 -- # break 00:12:40.749 10:26:34 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:40.749 10:26:34 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:40.749 10:26:34 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:40.749 1+0 records in 00:12:40.749 1+0 records out 00:12:40.749 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00127458 s, 3.2 MB/s 00:12:40.749 10:26:34 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:40.749 10:26:34 -- common/autotest_common.sh@874 -- # size=4096 00:12:40.749 10:26:34 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:40.749 10:26:34 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:40.749 10:26:34 -- common/autotest_common.sh@877 -- # return 0 00:12:40.749 10:26:34 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:40.749 10:26:34 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:40.749 10:26:34 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 00:12:41.007 10:26:34 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd13 00:12:41.007 10:26:34 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd13 00:12:41.007 10:26:34 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd13 00:12:41.007 10:26:34 -- common/autotest_common.sh@856 -- # local nbd_name=nbd13 00:12:41.007 10:26:34 -- common/autotest_common.sh@857 -- # local i 
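Driving all of this is a simple attach loop: for each of the sixteen bdevs, ask the SPDK app over its RPC socket to export the bdev as an NBD device (nbd_start_disk prints the path it picked, e.g. /dev/nbd12 for raid0 here), then hand the device name to waitfornbd. An illustrative sketch with the paths from the trace; error handling and the per-device bookkeeping of nbd_common.sh are elided:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock

for bdev in Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 \
            Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT \
            raid0 concat0 raid1 AIO0; do
    nbd_device=$("$rpc" -s "$sock" nbd_start_disk "$bdev")   # kernel picks the next free /dev/nbdN
    waitfornbd "$(basename "$nbd_device")"                   # polling helper sketched above
done
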
00:12:41.007 10:26:34 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:41.007 10:26:34 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:41.007 10:26:34 -- common/autotest_common.sh@860 -- # grep -q -w nbd13 /proc/partitions 00:12:41.007 10:26:34 -- common/autotest_common.sh@861 -- # break 00:12:41.007 10:26:34 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:41.007 10:26:34 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:41.007 10:26:34 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:41.007 1+0 records in 00:12:41.007 1+0 records out 00:12:41.007 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000958522 s, 4.3 MB/s 00:12:41.007 10:26:34 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:41.007 10:26:34 -- common/autotest_common.sh@874 -- # size=4096 00:12:41.007 10:26:34 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:41.007 10:26:34 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:41.007 10:26:34 -- common/autotest_common.sh@877 -- # return 0 00:12:41.007 10:26:34 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:41.007 10:26:34 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:41.007 10:26:34 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 00:12:41.266 10:26:35 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd14 00:12:41.266 10:26:35 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd14 00:12:41.266 10:26:35 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd14 00:12:41.266 10:26:35 -- common/autotest_common.sh@856 -- # local nbd_name=nbd14 00:12:41.266 10:26:35 -- common/autotest_common.sh@857 -- # local i 00:12:41.266 10:26:35 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:41.266 10:26:35 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:41.266 10:26:35 -- common/autotest_common.sh@860 -- # grep -q -w nbd14 /proc/partitions 00:12:41.266 10:26:35 -- common/autotest_common.sh@861 -- # break 00:12:41.266 10:26:35 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:41.266 10:26:35 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:41.266 10:26:35 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:41.266 1+0 records in 00:12:41.266 1+0 records out 00:12:41.266 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00061836 s, 6.6 MB/s 00:12:41.266 10:26:35 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:41.266 10:26:35 -- common/autotest_common.sh@874 -- # size=4096 00:12:41.266 10:26:35 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:41.266 10:26:35 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:41.266 10:26:35 -- common/autotest_common.sh@877 -- # return 0 00:12:41.266 10:26:35 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:41.266 10:26:35 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:41.266 10:26:35 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 00:12:41.524 10:26:35 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd15 00:12:41.524 10:26:35 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd15 00:12:41.524 10:26:35 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd15 00:12:41.524 10:26:35 -- common/autotest_common.sh@856 -- # 
local nbd_name=nbd15 00:12:41.524 10:26:35 -- common/autotest_common.sh@857 -- # local i 00:12:41.524 10:26:35 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:41.524 10:26:35 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:41.524 10:26:35 -- common/autotest_common.sh@860 -- # grep -q -w nbd15 /proc/partitions 00:12:41.524 10:26:35 -- common/autotest_common.sh@861 -- # break 00:12:41.524 10:26:35 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:41.524 10:26:35 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:41.524 10:26:35 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:41.524 1+0 records in 00:12:41.524 1+0 records out 00:12:41.524 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000992016 s, 4.1 MB/s 00:12:41.524 10:26:35 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:41.524 10:26:35 -- common/autotest_common.sh@874 -- # size=4096 00:12:41.524 10:26:35 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:41.524 10:26:35 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:41.524 10:26:35 -- common/autotest_common.sh@877 -- # return 0 00:12:41.524 10:26:35 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:41.524 10:26:35 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:41.524 10:26:35 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:41.783 10:26:35 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:12:41.783 { 00:12:41.783 "nbd_device": "/dev/nbd0", 00:12:41.783 "bdev_name": "Malloc0" 00:12:41.783 }, 00:12:41.783 { 00:12:41.783 "nbd_device": "/dev/nbd1", 00:12:41.783 "bdev_name": "Malloc1p0" 00:12:41.783 }, 00:12:41.783 { 00:12:41.783 "nbd_device": "/dev/nbd2", 00:12:41.783 "bdev_name": "Malloc1p1" 00:12:41.783 }, 00:12:41.783 { 00:12:41.783 "nbd_device": "/dev/nbd3", 00:12:41.783 "bdev_name": "Malloc2p0" 00:12:41.783 }, 00:12:41.783 { 00:12:41.783 "nbd_device": "/dev/nbd4", 00:12:41.783 "bdev_name": "Malloc2p1" 00:12:41.783 }, 00:12:41.783 { 00:12:41.783 "nbd_device": "/dev/nbd5", 00:12:41.783 "bdev_name": "Malloc2p2" 00:12:41.783 }, 00:12:41.783 { 00:12:41.783 "nbd_device": "/dev/nbd6", 00:12:41.783 "bdev_name": "Malloc2p3" 00:12:41.783 }, 00:12:41.783 { 00:12:41.783 "nbd_device": "/dev/nbd7", 00:12:41.783 "bdev_name": "Malloc2p4" 00:12:41.783 }, 00:12:41.783 { 00:12:41.783 "nbd_device": "/dev/nbd8", 00:12:41.783 "bdev_name": "Malloc2p5" 00:12:41.783 }, 00:12:41.783 { 00:12:41.783 "nbd_device": "/dev/nbd9", 00:12:41.783 "bdev_name": "Malloc2p6" 00:12:41.783 }, 00:12:41.783 { 00:12:41.783 "nbd_device": "/dev/nbd10", 00:12:41.783 "bdev_name": "Malloc2p7" 00:12:41.783 }, 00:12:41.783 { 00:12:41.783 "nbd_device": "/dev/nbd11", 00:12:41.783 "bdev_name": "TestPT" 00:12:41.783 }, 00:12:41.783 { 00:12:41.783 "nbd_device": "/dev/nbd12", 00:12:41.783 "bdev_name": "raid0" 00:12:41.783 }, 00:12:41.783 { 00:12:41.783 "nbd_device": "/dev/nbd13", 00:12:41.783 "bdev_name": "concat0" 00:12:41.783 }, 00:12:41.783 { 00:12:41.783 "nbd_device": "/dev/nbd14", 00:12:41.783 "bdev_name": "raid1" 00:12:41.783 }, 00:12:41.783 { 00:12:41.783 "nbd_device": "/dev/nbd15", 00:12:41.784 "bdev_name": "AIO0" 00:12:41.784 } 00:12:41.784 ]' 00:12:41.784 10:26:35 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:12:41.784 10:26:35 -- bdev/nbd_common.sh@119 -- # echo '[ 
00:12:41.784 { 00:12:41.784 "nbd_device": "/dev/nbd0", 00:12:41.784 "bdev_name": "Malloc0" 00:12:41.784 }, 00:12:41.784 { 00:12:41.784 "nbd_device": "/dev/nbd1", 00:12:41.784 "bdev_name": "Malloc1p0" 00:12:41.784 }, 00:12:41.784 { 00:12:41.784 "nbd_device": "/dev/nbd2", 00:12:41.784 "bdev_name": "Malloc1p1" 00:12:41.784 }, 00:12:41.784 { 00:12:41.784 "nbd_device": "/dev/nbd3", 00:12:41.784 "bdev_name": "Malloc2p0" 00:12:41.784 }, 00:12:41.784 { 00:12:41.784 "nbd_device": "/dev/nbd4", 00:12:41.784 "bdev_name": "Malloc2p1" 00:12:41.784 }, 00:12:41.784 { 00:12:41.784 "nbd_device": "/dev/nbd5", 00:12:41.784 "bdev_name": "Malloc2p2" 00:12:41.784 }, 00:12:41.784 { 00:12:41.784 "nbd_device": "/dev/nbd6", 00:12:41.784 "bdev_name": "Malloc2p3" 00:12:41.784 }, 00:12:41.784 { 00:12:41.784 "nbd_device": "/dev/nbd7", 00:12:41.784 "bdev_name": "Malloc2p4" 00:12:41.784 }, 00:12:41.784 { 00:12:41.784 "nbd_device": "/dev/nbd8", 00:12:41.784 "bdev_name": "Malloc2p5" 00:12:41.784 }, 00:12:41.784 { 00:12:41.784 "nbd_device": "/dev/nbd9", 00:12:41.784 "bdev_name": "Malloc2p6" 00:12:41.784 }, 00:12:41.784 { 00:12:41.784 "nbd_device": "/dev/nbd10", 00:12:41.784 "bdev_name": "Malloc2p7" 00:12:41.784 }, 00:12:41.784 { 00:12:41.784 "nbd_device": "/dev/nbd11", 00:12:41.784 "bdev_name": "TestPT" 00:12:41.784 }, 00:12:41.784 { 00:12:41.784 "nbd_device": "/dev/nbd12", 00:12:41.784 "bdev_name": "raid0" 00:12:41.784 }, 00:12:41.784 { 00:12:41.784 "nbd_device": "/dev/nbd13", 00:12:41.784 "bdev_name": "concat0" 00:12:41.784 }, 00:12:41.784 { 00:12:41.784 "nbd_device": "/dev/nbd14", 00:12:41.784 "bdev_name": "raid1" 00:12:41.784 }, 00:12:41.784 { 00:12:41.784 "nbd_device": "/dev/nbd15", 00:12:41.784 "bdev_name": "AIO0" 00:12:41.784 } 00:12:41.784 ]' 00:12:41.784 10:26:35 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:12:41.784 10:26:35 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15' 00:12:41.784 10:26:35 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:41.784 10:26:35 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:12:41.784 10:26:35 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:41.784 10:26:35 -- bdev/nbd_common.sh@51 -- # local i 00:12:41.784 10:26:35 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:41.784 10:26:35 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:42.043 10:26:35 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:42.043 10:26:35 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:42.043 10:26:35 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:42.043 10:26:35 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:42.043 10:26:35 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:42.043 10:26:35 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:42.043 10:26:35 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:42.043 10:26:35 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:42.043 10:26:35 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:42.043 10:26:35 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:42.043 10:26:35 -- bdev/nbd_common.sh@41 -- # break 00:12:42.043 10:26:35 -- bdev/nbd_common.sh@45 -- # return 0 00:12:42.043 10:26:35 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:42.043 10:26:35 -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:42.301 10:26:36 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:42.301 10:26:36 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:42.301 10:26:36 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:42.301 10:26:36 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:42.301 10:26:36 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:42.301 10:26:36 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:42.301 10:26:36 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:42.301 10:26:36 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:42.301 10:26:36 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:42.301 10:26:36 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:42.301 10:26:36 -- bdev/nbd_common.sh@41 -- # break 00:12:42.301 10:26:36 -- bdev/nbd_common.sh@45 -- # return 0 00:12:42.301 10:26:36 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:42.301 10:26:36 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:12:42.560 10:26:36 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:12:42.560 10:26:36 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:12:42.560 10:26:36 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:12:42.560 10:26:36 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:42.560 10:26:36 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:42.560 10:26:36 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:12:42.560 10:26:36 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:42.560 10:26:36 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:42.560 10:26:36 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:42.560 10:26:36 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:12:42.818 10:26:36 -- bdev/nbd_common.sh@41 -- # break 00:12:42.818 10:26:36 -- bdev/nbd_common.sh@45 -- # return 0 00:12:42.818 10:26:36 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:42.818 10:26:36 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:12:43.076 10:26:36 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:12:43.076 10:26:36 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:12:43.076 10:26:36 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:12:43.076 10:26:36 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:43.076 10:26:36 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:43.077 10:26:36 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:43.077 10:26:36 -- bdev/nbd_common.sh@41 -- # break 00:12:43.077 10:26:36 -- bdev/nbd_common.sh@45 -- # return 0 00:12:43.077 10:26:36 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:43.077 10:26:36 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:12:43.335 10:26:36 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:12:43.335 10:26:37 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:12:43.335 10:26:37 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:12:43.335 10:26:37 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:43.335 10:26:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:43.335 10:26:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:43.335 10:26:37 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:43.335 10:26:37 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:43.335 10:26:37 -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:43.335 10:26:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:43.335 10:26:37 -- bdev/nbd_common.sh@41 -- # break 00:12:43.335 10:26:37 -- bdev/nbd_common.sh@45 -- # return 0 00:12:43.335 10:26:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:43.335 10:26:37 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:12:43.593 10:26:37 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:12:43.593 10:26:37 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:12:43.593 10:26:37 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:12:43.593 10:26:37 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:43.593 10:26:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:43.593 10:26:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:43.593 10:26:37 -- bdev/nbd_common.sh@41 -- # break 00:12:43.593 10:26:37 -- bdev/nbd_common.sh@45 -- # return 0 00:12:43.593 10:26:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:43.593 10:26:37 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:12:43.851 10:26:37 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:12:43.851 10:26:37 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:12:43.851 10:26:37 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:12:43.851 10:26:37 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:43.851 10:26:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:43.851 10:26:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:12:43.851 10:26:37 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:43.851 10:26:37 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:43.851 10:26:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:43.851 10:26:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:12:43.851 10:26:37 -- bdev/nbd_common.sh@41 -- # break 00:12:43.851 10:26:37 -- bdev/nbd_common.sh@45 -- # return 0 00:12:43.851 10:26:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:43.851 10:26:37 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:12:44.109 10:26:37 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:12:44.109 10:26:37 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:12:44.109 10:26:37 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:12:44.109 10:26:37 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:44.109 10:26:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:44.109 10:26:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:12:44.109 10:26:37 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:44.109 10:26:37 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:44.109 10:26:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:44.109 10:26:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:12:44.109 10:26:37 -- bdev/nbd_common.sh@41 -- # break 00:12:44.109 10:26:37 -- bdev/nbd_common.sh@45 -- # return 0 00:12:44.109 10:26:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:44.109 10:26:37 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:12:44.368 10:26:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:12:44.368 10:26:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:12:44.368 10:26:38 -- bdev/nbd_common.sh@35 -- # local 
nbd_name=nbd8 00:12:44.368 10:26:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:44.368 10:26:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:44.368 10:26:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:12:44.368 10:26:38 -- bdev/nbd_common.sh@41 -- # break 00:12:44.368 10:26:38 -- bdev/nbd_common.sh@45 -- # return 0 00:12:44.368 10:26:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:44.368 10:26:38 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:12:44.627 10:26:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:12:44.627 10:26:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:12:44.627 10:26:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:12:44.627 10:26:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:44.627 10:26:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:44.627 10:26:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:12:44.627 10:26:38 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:44.885 10:26:38 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:44.885 10:26:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:44.885 10:26:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:12:44.885 10:26:38 -- bdev/nbd_common.sh@41 -- # break 00:12:44.885 10:26:38 -- bdev/nbd_common.sh@45 -- # return 0 00:12:44.885 10:26:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:44.885 10:26:38 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:12:45.145 10:26:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:12:45.145 10:26:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:12:45.145 10:26:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:12:45.145 10:26:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:45.145 10:26:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:45.145 10:26:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:45.145 10:26:38 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:45.145 10:26:38 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:45.145 10:26:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:45.145 10:26:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:45.145 10:26:38 -- bdev/nbd_common.sh@41 -- # break 00:12:45.145 10:26:38 -- bdev/nbd_common.sh@45 -- # return 0 00:12:45.145 10:26:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:45.145 10:26:38 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:12:45.403 10:26:39 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:12:45.403 10:26:39 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:12:45.403 10:26:39 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:12:45.403 10:26:39 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:45.403 10:26:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:45.403 10:26:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:45.403 10:26:39 -- bdev/nbd_common.sh@41 -- # break 00:12:45.403 10:26:39 -- bdev/nbd_common.sh@45 -- # return 0 00:12:45.404 10:26:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:45.404 10:26:39 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:12:45.662 10:26:39 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:12:45.662 10:26:39 -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:12:45.662 10:26:39 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:12:45.662 10:26:39 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:45.662 10:26:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:45.662 10:26:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:45.662 10:26:39 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:45.920 10:26:39 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:45.920 10:26:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:45.920 10:26:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:45.920 10:26:39 -- bdev/nbd_common.sh@41 -- # break 00:12:45.920 10:26:39 -- bdev/nbd_common.sh@45 -- # return 0 00:12:45.920 10:26:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:45.920 10:26:39 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:12:46.178 10:26:39 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:12:46.178 10:26:39 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:12:46.178 10:26:39 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:12:46.178 10:26:39 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:46.178 10:26:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:46.178 10:26:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:46.178 10:26:39 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:46.178 10:26:39 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:46.179 10:26:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:46.179 10:26:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:46.179 10:26:39 -- bdev/nbd_common.sh@41 -- # break 00:12:46.179 10:26:39 -- bdev/nbd_common.sh@45 -- # return 0 00:12:46.179 10:26:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:46.179 10:26:39 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:12:46.437 10:26:40 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:12:46.437 10:26:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:12:46.437 10:26:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:12:46.437 10:26:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:46.437 10:26:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:46.437 10:26:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:12:46.437 10:26:40 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:46.437 10:26:40 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:46.437 10:26:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:46.437 10:26:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:12:46.437 10:26:40 -- bdev/nbd_common.sh@41 -- # break 00:12:46.437 10:26:40 -- bdev/nbd_common.sh@45 -- # return 0 00:12:46.437 10:26:40 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:46.437 10:26:40 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:12:46.695 10:26:40 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:12:46.695 10:26:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:12:46.695 10:26:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:12:46.695 10:26:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:46.695 10:26:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:46.695 10:26:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:12:46.695 10:26:40 -- bdev/nbd_common.sh@39 -- # 
sleep 0.1 00:12:46.954 10:26:40 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:46.954 10:26:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:46.954 10:26:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:12:46.954 10:26:40 -- bdev/nbd_common.sh@41 -- # break 00:12:46.954 10:26:40 -- bdev/nbd_common.sh@45 -- # return 0 00:12:46.954 10:26:40 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:46.954 10:26:40 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:46.954 10:26:40 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:46.954 10:26:40 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:46.954 10:26:40 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:46.954 10:26:40 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:46.954 10:26:40 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:46.954 10:26:40 -- bdev/nbd_common.sh@65 -- # echo '' 00:12:46.954 10:26:40 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:47.212 10:26:40 -- bdev/nbd_common.sh@65 -- # true 00:12:47.212 10:26:40 -- bdev/nbd_common.sh@65 -- # count=0 00:12:47.212 10:26:40 -- bdev/nbd_common.sh@66 -- # echo 0 00:12:47.212 10:26:40 -- bdev/nbd_common.sh@122 -- # count=0 00:12:47.212 10:26:40 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:12:47.212 10:26:40 -- bdev/nbd_common.sh@127 -- # return 0 00:12:47.212 10:26:40 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:12:47.212 10:26:40 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:47.212 10:26:40 -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:12:47.212 10:26:40 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:47.212 10:26:40 -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:12:47.212 10:26:40 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:47.212 10:26:40 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:12:47.212 10:26:40 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:47.212 10:26:40 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:12:47.212 10:26:40 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:47.212 10:26:40 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:12:47.212 10:26:40 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:47.212 10:26:40 -- bdev/nbd_common.sh@12 -- # local i 00:12:47.212 10:26:40 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:47.212 10:26:40 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:47.212 10:26:40 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:12:47.212 /dev/nbd0 00:12:47.212 10:26:41 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:47.212 10:26:41 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:47.212 10:26:41 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 
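The nbd_get_count step just above is how the teardown proves every export is gone: it re-queries nbd_get_disks over the RPC socket and counts how many /dev/nbd paths survive the jq filter. A minimal sketch of that check, condensed from the bdev/nbd_common.sh xtrace (the function body is re-created here for illustration, not the verbatim SPDK helper):

    # Re-query the RPC server and count its remaining /dev/nbd exports.
    # Condensed from the xtrace above; variable names are illustrative.
    nbd_get_count() {
        local rpc_server=$1 json names count
        json=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_server" nbd_get_disks)
        # Keep only the device paths from the [{nbd_device, bdev_name}, ...] array.
        names=$(echo "$json" | jq -r '.[] | .nbd_device')
        # grep -c prints 0 but exits non-zero when nothing matches; `|| true`
        # absorbs that so the count survives under `set -e`, as in the trace.
        count=$(echo "$names" | grep -c /dev/nbd || true)
        echo "$count"
    }

In the trace the stop path asserts this count is 0 (the '[' 0 -ne 0 ']' test falls through), while the verify pass that begins next expects it to reach 16 once all bdevs are re-attached.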
00:12:47.212 10:26:41 -- common/autotest_common.sh@857 -- # local i 00:12:47.212 10:26:41 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:47.212 10:26:41 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:47.212 10:26:41 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:12:47.212 10:26:41 -- common/autotest_common.sh@861 -- # break 00:12:47.212 10:26:41 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:47.212 10:26:41 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:47.212 10:26:41 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:47.212 1+0 records in 00:12:47.212 1+0 records out 00:12:47.212 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000344104 s, 11.9 MB/s 00:12:47.212 10:26:41 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:47.212 10:26:41 -- common/autotest_common.sh@874 -- # size=4096 00:12:47.212 10:26:41 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:47.212 10:26:41 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:47.212 10:26:41 -- common/autotest_common.sh@877 -- # return 0 00:12:47.212 10:26:41 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:47.212 10:26:41 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:47.212 10:26:41 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 /dev/nbd1 00:12:47.469 /dev/nbd1 00:12:47.469 10:26:41 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:47.469 10:26:41 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:47.469 10:26:41 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:12:47.469 10:26:41 -- common/autotest_common.sh@857 -- # local i 00:12:47.469 10:26:41 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:47.469 10:26:41 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:47.469 10:26:41 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:12:47.469 10:26:41 -- common/autotest_common.sh@861 -- # break 00:12:47.469 10:26:41 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:47.469 10:26:41 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:47.469 10:26:41 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:47.469 1+0 records in 00:12:47.469 1+0 records out 00:12:47.469 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00055174 s, 7.4 MB/s 00:12:47.469 10:26:41 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:47.469 10:26:41 -- common/autotest_common.sh@874 -- # size=4096 00:12:47.469 10:26:41 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:47.469 10:26:41 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:47.469 10:26:41 -- common/autotest_common.sh@877 -- # return 0 00:12:47.469 10:26:41 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:47.469 10:26:41 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:47.469 10:26:41 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 /dev/nbd10 00:12:48.034 /dev/nbd10 00:12:48.034 10:26:41 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:12:48.034 10:26:41 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:12:48.034 10:26:41 -- common/autotest_common.sh@856 -- # local 
nbd_name=nbd10 00:12:48.034 10:26:41 -- common/autotest_common.sh@857 -- # local i 00:12:48.034 10:26:41 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:48.034 10:26:41 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:48.034 10:26:41 -- common/autotest_common.sh@860 -- # grep -q -w nbd10 /proc/partitions 00:12:48.034 10:26:41 -- common/autotest_common.sh@861 -- # break 00:12:48.034 10:26:41 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:48.034 10:26:41 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:48.034 10:26:41 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:48.034 1+0 records in 00:12:48.034 1+0 records out 00:12:48.034 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000498605 s, 8.2 MB/s 00:12:48.034 10:26:41 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:48.034 10:26:41 -- common/autotest_common.sh@874 -- # size=4096 00:12:48.034 10:26:41 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:48.034 10:26:41 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:48.034 10:26:41 -- common/autotest_common.sh@877 -- # return 0 00:12:48.034 10:26:41 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:48.034 10:26:41 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:48.034 10:26:41 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 /dev/nbd11 00:12:48.034 /dev/nbd11 00:12:48.034 10:26:41 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:12:48.034 10:26:41 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:12:48.034 10:26:41 -- common/autotest_common.sh@856 -- # local nbd_name=nbd11 00:12:48.034 10:26:41 -- common/autotest_common.sh@857 -- # local i 00:12:48.034 10:26:41 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:48.034 10:26:41 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:48.034 10:26:41 -- common/autotest_common.sh@860 -- # grep -q -w nbd11 /proc/partitions 00:12:48.034 10:26:41 -- common/autotest_common.sh@861 -- # break 00:12:48.034 10:26:41 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:48.034 10:26:41 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:48.034 10:26:41 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:48.034 1+0 records in 00:12:48.034 1+0 records out 00:12:48.034 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000412112 s, 9.9 MB/s 00:12:48.034 10:26:41 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:48.034 10:26:41 -- common/autotest_common.sh@874 -- # size=4096 00:12:48.034 10:26:41 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:48.034 10:26:41 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:48.034 10:26:41 -- common/autotest_common.sh@877 -- # return 0 00:12:48.034 10:26:41 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:48.034 10:26:41 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:48.034 10:26:41 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 /dev/nbd12 00:12:48.293 /dev/nbd12 00:12:48.293 10:26:42 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:12:48.293 10:26:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:12:48.293 10:26:42 -- 
common/autotest_common.sh@856 -- # local nbd_name=nbd12 00:12:48.293 10:26:42 -- common/autotest_common.sh@857 -- # local i 00:12:48.293 10:26:42 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:48.293 10:26:42 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:48.293 10:26:42 -- common/autotest_common.sh@860 -- # grep -q -w nbd12 /proc/partitions 00:12:48.293 10:26:42 -- common/autotest_common.sh@861 -- # break 00:12:48.293 10:26:42 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:48.293 10:26:42 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:48.293 10:26:42 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:48.293 1+0 records in 00:12:48.293 1+0 records out 00:12:48.293 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000627727 s, 6.5 MB/s 00:12:48.293 10:26:42 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:48.293 10:26:42 -- common/autotest_common.sh@874 -- # size=4096 00:12:48.293 10:26:42 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:48.293 10:26:42 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:48.293 10:26:42 -- common/autotest_common.sh@877 -- # return 0 00:12:48.293 10:26:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:48.293 10:26:42 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:48.293 10:26:42 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 /dev/nbd13 00:12:48.551 /dev/nbd13 00:12:48.551 10:26:42 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:12:48.551 10:26:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:12:48.551 10:26:42 -- common/autotest_common.sh@856 -- # local nbd_name=nbd13 00:12:48.551 10:26:42 -- common/autotest_common.sh@857 -- # local i 00:12:48.551 10:26:42 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:48.551 10:26:42 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:48.551 10:26:42 -- common/autotest_common.sh@860 -- # grep -q -w nbd13 /proc/partitions 00:12:48.551 10:26:42 -- common/autotest_common.sh@861 -- # break 00:12:48.551 10:26:42 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:48.551 10:26:42 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:48.551 10:26:42 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:48.551 1+0 records in 00:12:48.551 1+0 records out 00:12:48.551 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000613915 s, 6.7 MB/s 00:12:48.551 10:26:42 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:48.551 10:26:42 -- common/autotest_common.sh@874 -- # size=4096 00:12:48.551 10:26:42 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:48.551 10:26:42 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:48.551 10:26:42 -- common/autotest_common.sh@877 -- # return 0 00:12:48.551 10:26:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:48.551 10:26:42 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:48.551 10:26:42 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 /dev/nbd14 00:12:48.808 /dev/nbd14 00:12:48.808 10:26:42 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:12:48.808 10:26:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 
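Each nbd_start_disk above is followed by the same waitfornbd guard: poll /proc/partitions until the kernel publishes the new device, then prove it is readable with a single 4 KiB O_DIRECT read. A condensed sketch of that guard as it appears in the trace (re-created for illustration; /tmp/nbdtest stands in for the repo's test/bdev/nbdtest scratch file, and the retry loop around dd is collapsed to one attempt):

    # Wait for /dev/<nbd_name> to appear, then sanity-read one block from it.
    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            # The device exists once the kernel lists it in /proc/partitions.
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # One direct-I/O read: this is the "1+0 records in / out" in the log.
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        # A zero-byte result would mean the read silently failed.
        [ "$size" != 0 ]
    }

waitfornbd_exit is the mirror image: the same polling loop over /proc/partitions, but it breaks out once grep stops matching, i.e. once the device has disappeared.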
00:12:48.808 10:26:42 -- common/autotest_common.sh@856 -- # local nbd_name=nbd14 00:12:48.808 10:26:42 -- common/autotest_common.sh@857 -- # local i 00:12:48.808 10:26:42 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:48.808 10:26:42 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:48.808 10:26:42 -- common/autotest_common.sh@860 -- # grep -q -w nbd14 /proc/partitions 00:12:48.808 10:26:42 -- common/autotest_common.sh@861 -- # break 00:12:48.808 10:26:42 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:48.808 10:26:42 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:48.808 10:26:42 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:48.808 1+0 records in 00:12:48.808 1+0 records out 00:12:48.808 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000737489 s, 5.6 MB/s 00:12:48.808 10:26:42 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:48.808 10:26:42 -- common/autotest_common.sh@874 -- # size=4096 00:12:48.808 10:26:42 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:48.808 10:26:42 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:48.808 10:26:42 -- common/autotest_common.sh@877 -- # return 0 00:12:48.808 10:26:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:48.808 10:26:42 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:48.808 10:26:42 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 /dev/nbd15 00:12:49.066 /dev/nbd15 00:12:49.066 10:26:42 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd15 00:12:49.066 10:26:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd15 00:12:49.066 10:26:42 -- common/autotest_common.sh@856 -- # local nbd_name=nbd15 00:12:49.066 10:26:42 -- common/autotest_common.sh@857 -- # local i 00:12:49.066 10:26:42 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:49.066 10:26:42 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:49.066 10:26:42 -- common/autotest_common.sh@860 -- # grep -q -w nbd15 /proc/partitions 00:12:49.066 10:26:42 -- common/autotest_common.sh@861 -- # break 00:12:49.066 10:26:42 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:49.066 10:26:42 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:49.066 10:26:42 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:49.066 1+0 records in 00:12:49.066 1+0 records out 00:12:49.066 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000636976 s, 6.4 MB/s 00:12:49.066 10:26:42 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:49.066 10:26:42 -- common/autotest_common.sh@874 -- # size=4096 00:12:49.066 10:26:42 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:49.066 10:26:42 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:49.066 10:26:42 -- common/autotest_common.sh@877 -- # return 0 00:12:49.066 10:26:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:49.066 10:26:42 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:49.066 10:26:42 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 /dev/nbd2 00:12:49.323 /dev/nbd2 00:12:49.323 10:26:43 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd2 00:12:49.323 10:26:43 -- bdev/nbd_common.sh@17 -- 
# waitfornbd nbd2 00:12:49.323 10:26:43 -- common/autotest_common.sh@856 -- # local nbd_name=nbd2 00:12:49.323 10:26:43 -- common/autotest_common.sh@857 -- # local i 00:12:49.323 10:26:43 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:49.323 10:26:43 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:49.323 10:26:43 -- common/autotest_common.sh@860 -- # grep -q -w nbd2 /proc/partitions 00:12:49.323 10:26:43 -- common/autotest_common.sh@861 -- # break 00:12:49.323 10:26:43 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:49.323 10:26:43 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:49.323 10:26:43 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:49.323 1+0 records in 00:12:49.323 1+0 records out 00:12:49.323 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000488002 s, 8.4 MB/s 00:12:49.323 10:26:43 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:49.323 10:26:43 -- common/autotest_common.sh@874 -- # size=4096 00:12:49.323 10:26:43 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:49.323 10:26:43 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:49.323 10:26:43 -- common/autotest_common.sh@877 -- # return 0 00:12:49.323 10:26:43 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:49.323 10:26:43 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:49.323 10:26:43 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 /dev/nbd3 00:12:49.579 /dev/nbd3 00:12:49.579 10:26:43 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd3 00:12:49.579 10:26:43 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd3 00:12:49.579 10:26:43 -- common/autotest_common.sh@856 -- # local nbd_name=nbd3 00:12:49.579 10:26:43 -- common/autotest_common.sh@857 -- # local i 00:12:49.579 10:26:43 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:49.579 10:26:43 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:49.579 10:26:43 -- common/autotest_common.sh@860 -- # grep -q -w nbd3 /proc/partitions 00:12:49.579 10:26:43 -- common/autotest_common.sh@861 -- # break 00:12:49.579 10:26:43 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:49.579 10:26:43 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:49.579 10:26:43 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:49.579 1+0 records in 00:12:49.579 1+0 records out 00:12:49.579 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000877228 s, 4.7 MB/s 00:12:49.579 10:26:43 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:49.579 10:26:43 -- common/autotest_common.sh@874 -- # size=4096 00:12:49.579 10:26:43 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:49.579 10:26:43 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:49.579 10:26:43 -- common/autotest_common.sh@877 -- # return 0 00:12:49.579 10:26:43 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:49.579 10:26:43 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:49.579 10:26:43 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 /dev/nbd4 00:12:49.836 /dev/nbd4 00:12:49.836 10:26:43 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd4 00:12:49.836 10:26:43 -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd4 00:12:49.836 10:26:43 -- common/autotest_common.sh@856 -- # local nbd_name=nbd4 00:12:49.836 10:26:43 -- common/autotest_common.sh@857 -- # local i 00:12:49.836 10:26:43 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:49.836 10:26:43 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:49.836 10:26:43 -- common/autotest_common.sh@860 -- # grep -q -w nbd4 /proc/partitions 00:12:49.836 10:26:43 -- common/autotest_common.sh@861 -- # break 00:12:49.836 10:26:43 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:49.836 10:26:43 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:49.836 10:26:43 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:49.836 1+0 records in 00:12:49.836 1+0 records out 00:12:49.836 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000490595 s, 8.3 MB/s 00:12:49.836 10:26:43 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:49.836 10:26:43 -- common/autotest_common.sh@874 -- # size=4096 00:12:49.836 10:26:43 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:49.836 10:26:43 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:49.836 10:26:43 -- common/autotest_common.sh@877 -- # return 0 00:12:49.836 10:26:43 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:49.836 10:26:43 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:49.836 10:26:43 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT /dev/nbd5 00:12:50.093 /dev/nbd5 00:12:50.093 10:26:43 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd5 00:12:50.093 10:26:43 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd5 00:12:50.093 10:26:43 -- common/autotest_common.sh@856 -- # local nbd_name=nbd5 00:12:50.093 10:26:43 -- common/autotest_common.sh@857 -- # local i 00:12:50.093 10:26:43 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:50.093 10:26:43 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:50.093 10:26:43 -- common/autotest_common.sh@860 -- # grep -q -w nbd5 /proc/partitions 00:12:50.093 10:26:43 -- common/autotest_common.sh@861 -- # break 00:12:50.093 10:26:43 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:50.093 10:26:43 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:50.093 10:26:43 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:50.093 1+0 records in 00:12:50.093 1+0 records out 00:12:50.093 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000388204 s, 10.6 MB/s 00:12:50.093 10:26:43 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.093 10:26:43 -- common/autotest_common.sh@874 -- # size=4096 00:12:50.093 10:26:43 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.093 10:26:43 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:50.093 10:26:43 -- common/autotest_common.sh@877 -- # return 0 00:12:50.093 10:26:43 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:50.093 10:26:43 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:50.093 10:26:43 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 /dev/nbd6 00:12:50.350 /dev/nbd6 00:12:50.350 10:26:44 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd6 00:12:50.350 10:26:44 
-- bdev/nbd_common.sh@17 -- # waitfornbd nbd6 00:12:50.350 10:26:44 -- common/autotest_common.sh@856 -- # local nbd_name=nbd6 00:12:50.350 10:26:44 -- common/autotest_common.sh@857 -- # local i 00:12:50.350 10:26:44 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:50.350 10:26:44 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:50.350 10:26:44 -- common/autotest_common.sh@860 -- # grep -q -w nbd6 /proc/partitions 00:12:50.350 10:26:44 -- common/autotest_common.sh@861 -- # break 00:12:50.350 10:26:44 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:50.350 10:26:44 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:50.350 10:26:44 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:50.350 1+0 records in 00:12:50.350 1+0 records out 00:12:50.350 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000890994 s, 4.6 MB/s 00:12:50.350 10:26:44 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.350 10:26:44 -- common/autotest_common.sh@874 -- # size=4096 00:12:50.350 10:26:44 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.350 10:26:44 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:50.350 10:26:44 -- common/autotest_common.sh@877 -- # return 0 00:12:50.350 10:26:44 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:50.350 10:26:44 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:50.350 10:26:44 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 /dev/nbd7 00:12:50.619 /dev/nbd7 00:12:50.619 10:26:44 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd7 00:12:50.619 10:26:44 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd7 00:12:50.619 10:26:44 -- common/autotest_common.sh@856 -- # local nbd_name=nbd7 00:12:50.619 10:26:44 -- common/autotest_common.sh@857 -- # local i 00:12:50.619 10:26:44 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:50.619 10:26:44 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:50.619 10:26:44 -- common/autotest_common.sh@860 -- # grep -q -w nbd7 /proc/partitions 00:12:50.619 10:26:44 -- common/autotest_common.sh@861 -- # break 00:12:50.619 10:26:44 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:50.619 10:26:44 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:50.619 10:26:44 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:50.619 1+0 records in 00:12:50.619 1+0 records out 00:12:50.619 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106284 s, 3.9 MB/s 00:12:50.619 10:26:44 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.619 10:26:44 -- common/autotest_common.sh@874 -- # size=4096 00:12:50.619 10:26:44 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.619 10:26:44 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:50.619 10:26:44 -- common/autotest_common.sh@877 -- # return 0 00:12:50.619 10:26:44 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:50.619 10:26:44 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:50.619 10:26:44 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 /dev/nbd8 00:12:50.924 /dev/nbd8 00:12:50.924 10:26:44 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd8 00:12:50.924 10:26:44 
-- bdev/nbd_common.sh@17 -- # waitfornbd nbd8 00:12:50.924 10:26:44 -- common/autotest_common.sh@856 -- # local nbd_name=nbd8 00:12:50.924 10:26:44 -- common/autotest_common.sh@857 -- # local i 00:12:50.924 10:26:44 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:50.924 10:26:44 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:50.925 10:26:44 -- common/autotest_common.sh@860 -- # grep -q -w nbd8 /proc/partitions 00:12:50.925 10:26:44 -- common/autotest_common.sh@861 -- # break 00:12:50.925 10:26:44 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:50.925 10:26:44 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:50.925 10:26:44 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:50.925 1+0 records in 00:12:50.925 1+0 records out 00:12:50.925 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000895638 s, 4.6 MB/s 00:12:50.925 10:26:44 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.925 10:26:44 -- common/autotest_common.sh@874 -- # size=4096 00:12:50.925 10:26:44 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.925 10:26:44 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:50.925 10:26:44 -- common/autotest_common.sh@877 -- # return 0 00:12:50.925 10:26:44 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:50.925 10:26:44 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:50.925 10:26:44 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 /dev/nbd9 00:12:50.925 /dev/nbd9 00:12:50.925 10:26:44 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd9 00:12:50.925 10:26:44 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd9 00:12:50.925 10:26:44 -- common/autotest_common.sh@856 -- # local nbd_name=nbd9 00:12:50.925 10:26:44 -- common/autotest_common.sh@857 -- # local i 00:12:50.925 10:26:44 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:50.925 10:26:44 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:50.925 10:26:44 -- common/autotest_common.sh@860 -- # grep -q -w nbd9 /proc/partitions 00:12:50.925 10:26:44 -- common/autotest_common.sh@861 -- # break 00:12:50.925 10:26:44 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:50.925 10:26:44 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:50.925 10:26:44 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:50.925 1+0 records in 00:12:50.925 1+0 records out 00:12:50.925 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00102973 s, 4.0 MB/s 00:12:50.925 10:26:44 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.925 10:26:44 -- common/autotest_common.sh@874 -- # size=4096 00:12:50.925 10:26:44 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.925 10:26:44 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:50.925 10:26:44 -- common/autotest_common.sh@877 -- # return 0 00:12:50.925 10:26:44 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:50.925 10:26:44 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:50.925 10:26:44 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:50.925 10:26:44 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:50.925 10:26:44 -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:51.198 10:26:45 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:51.198 { 00:12:51.198 "nbd_device": "/dev/nbd0", 00:12:51.198 "bdev_name": "Malloc0" 00:12:51.198 }, 00:12:51.198 { 00:12:51.198 "nbd_device": "/dev/nbd1", 00:12:51.198 "bdev_name": "Malloc1p0" 00:12:51.198 }, 00:12:51.198 { 00:12:51.198 "nbd_device": "/dev/nbd10", 00:12:51.198 "bdev_name": "Malloc1p1" 00:12:51.198 }, 00:12:51.198 { 00:12:51.198 "nbd_device": "/dev/nbd11", 00:12:51.198 "bdev_name": "Malloc2p0" 00:12:51.198 }, 00:12:51.198 { 00:12:51.198 "nbd_device": "/dev/nbd12", 00:12:51.198 "bdev_name": "Malloc2p1" 00:12:51.198 }, 00:12:51.198 { 00:12:51.198 "nbd_device": "/dev/nbd13", 00:12:51.198 "bdev_name": "Malloc2p2" 00:12:51.198 }, 00:12:51.198 { 00:12:51.198 "nbd_device": "/dev/nbd14", 00:12:51.198 "bdev_name": "Malloc2p3" 00:12:51.198 }, 00:12:51.198 { 00:12:51.198 "nbd_device": "/dev/nbd15", 00:12:51.198 "bdev_name": "Malloc2p4" 00:12:51.198 }, 00:12:51.198 { 00:12:51.198 "nbd_device": "/dev/nbd2", 00:12:51.198 "bdev_name": "Malloc2p5" 00:12:51.198 }, 00:12:51.198 { 00:12:51.198 "nbd_device": "/dev/nbd3", 00:12:51.198 "bdev_name": "Malloc2p6" 00:12:51.198 }, 00:12:51.198 { 00:12:51.198 "nbd_device": "/dev/nbd4", 00:12:51.198 "bdev_name": "Malloc2p7" 00:12:51.198 }, 00:12:51.198 { 00:12:51.198 "nbd_device": "/dev/nbd5", 00:12:51.198 "bdev_name": "TestPT" 00:12:51.198 }, 00:12:51.198 { 00:12:51.198 "nbd_device": "/dev/nbd6", 00:12:51.198 "bdev_name": "raid0" 00:12:51.198 }, 00:12:51.198 { 00:12:51.198 "nbd_device": "/dev/nbd7", 00:12:51.198 "bdev_name": "concat0" 00:12:51.198 }, 00:12:51.198 { 00:12:51.198 "nbd_device": "/dev/nbd8", 00:12:51.198 "bdev_name": "raid1" 00:12:51.198 }, 00:12:51.198 { 00:12:51.198 "nbd_device": "/dev/nbd9", 00:12:51.198 "bdev_name": "AIO0" 00:12:51.198 } 00:12:51.198 ]' 00:12:51.198 10:26:45 -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:51.198 { 00:12:51.198 "nbd_device": "/dev/nbd0", 00:12:51.198 "bdev_name": "Malloc0" 00:12:51.198 }, 00:12:51.198 { 00:12:51.198 "nbd_device": "/dev/nbd1", 00:12:51.198 "bdev_name": "Malloc1p0" 00:12:51.198 }, 00:12:51.198 { 00:12:51.198 "nbd_device": "/dev/nbd10", 00:12:51.198 "bdev_name": "Malloc1p1" 00:12:51.198 }, 00:12:51.198 { 00:12:51.198 "nbd_device": "/dev/nbd11", 00:12:51.198 "bdev_name": "Malloc2p0" 00:12:51.198 }, 00:12:51.198 { 00:12:51.198 "nbd_device": "/dev/nbd12", 00:12:51.198 "bdev_name": "Malloc2p1" 00:12:51.198 }, 00:12:51.198 { 00:12:51.198 "nbd_device": "/dev/nbd13", 00:12:51.198 "bdev_name": "Malloc2p2" 00:12:51.198 }, 00:12:51.198 { 00:12:51.198 "nbd_device": "/dev/nbd14", 00:12:51.198 "bdev_name": "Malloc2p3" 00:12:51.198 }, 00:12:51.198 { 00:12:51.198 "nbd_device": "/dev/nbd15", 00:12:51.198 "bdev_name": "Malloc2p4" 00:12:51.198 }, 00:12:51.198 { 00:12:51.198 "nbd_device": "/dev/nbd2", 00:12:51.199 "bdev_name": "Malloc2p5" 00:12:51.199 }, 00:12:51.199 { 00:12:51.199 "nbd_device": "/dev/nbd3", 00:12:51.199 "bdev_name": "Malloc2p6" 00:12:51.199 }, 00:12:51.199 { 00:12:51.199 "nbd_device": "/dev/nbd4", 00:12:51.199 "bdev_name": "Malloc2p7" 00:12:51.199 }, 00:12:51.199 { 00:12:51.199 "nbd_device": "/dev/nbd5", 00:12:51.199 "bdev_name": "TestPT" 00:12:51.199 }, 00:12:51.199 { 00:12:51.199 "nbd_device": "/dev/nbd6", 00:12:51.199 "bdev_name": "raid0" 00:12:51.199 }, 00:12:51.199 { 00:12:51.199 "nbd_device": "/dev/nbd7", 00:12:51.199 "bdev_name": "concat0" 00:12:51.199 }, 00:12:51.199 { 00:12:51.199 "nbd_device": "/dev/nbd8", 00:12:51.199 
"bdev_name": "raid1" 00:12:51.199 }, 00:12:51.199 { 00:12:51.199 "nbd_device": "/dev/nbd9", 00:12:51.199 "bdev_name": "AIO0" 00:12:51.199 } 00:12:51.199 ]' 00:12:51.199 10:26:45 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:51.199 10:26:45 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:51.199 /dev/nbd1 00:12:51.199 /dev/nbd10 00:12:51.199 /dev/nbd11 00:12:51.199 /dev/nbd12 00:12:51.199 /dev/nbd13 00:12:51.199 /dev/nbd14 00:12:51.199 /dev/nbd15 00:12:51.199 /dev/nbd2 00:12:51.199 /dev/nbd3 00:12:51.199 /dev/nbd4 00:12:51.199 /dev/nbd5 00:12:51.199 /dev/nbd6 00:12:51.199 /dev/nbd7 00:12:51.199 /dev/nbd8 00:12:51.199 /dev/nbd9' 00:12:51.199 10:26:45 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:51.199 /dev/nbd1 00:12:51.199 /dev/nbd10 00:12:51.199 /dev/nbd11 00:12:51.199 /dev/nbd12 00:12:51.199 /dev/nbd13 00:12:51.199 /dev/nbd14 00:12:51.199 /dev/nbd15 00:12:51.199 /dev/nbd2 00:12:51.199 /dev/nbd3 00:12:51.199 /dev/nbd4 00:12:51.199 /dev/nbd5 00:12:51.199 /dev/nbd6 00:12:51.199 /dev/nbd7 00:12:51.199 /dev/nbd8 00:12:51.199 /dev/nbd9' 00:12:51.456 10:26:45 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:51.456 10:26:45 -- bdev/nbd_common.sh@65 -- # count=16 00:12:51.456 10:26:45 -- bdev/nbd_common.sh@66 -- # echo 16 00:12:51.456 10:26:45 -- bdev/nbd_common.sh@95 -- # count=16 00:12:51.456 10:26:45 -- bdev/nbd_common.sh@96 -- # '[' 16 -ne 16 ']' 00:12:51.456 10:26:45 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' write 00:12:51.456 10:26:45 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:12:51.456 10:26:45 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:51.456 10:26:45 -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:51.456 10:26:45 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:51.456 10:26:45 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:51.456 10:26:45 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:12:51.456 256+0 records in 00:12:51.456 256+0 records out 00:12:51.456 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103665 s, 101 MB/s 00:12:51.456 10:26:45 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:51.456 10:26:45 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:51.456 256+0 records in 00:12:51.456 256+0 records out 00:12:51.456 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.129828 s, 8.1 MB/s 00:12:51.456 10:26:45 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:51.456 10:26:45 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:51.714 256+0 records in 00:12:51.714 256+0 records out 00:12:51.714 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.131556 s, 8.0 MB/s 00:12:51.714 10:26:45 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:51.714 10:26:45 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:12:51.714 256+0 records in 00:12:51.714 256+0 records out 00:12:51.714 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.12546 s, 8.4 MB/s 00:12:51.714 10:26:45 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:51.714 10:26:45 -- bdev/nbd_common.sh@78 -- 
# dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:12:51.972 256+0 records in 00:12:51.972 256+0 records out 00:12:51.972 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.136418 s, 7.7 MB/s 00:12:51.972 10:26:45 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:51.972 10:26:45 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:12:51.972 256+0 records in 00:12:51.972 256+0 records out 00:12:51.972 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.123918 s, 8.5 MB/s 00:12:51.972 10:26:45 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:51.972 10:26:45 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:12:52.230 256+0 records in 00:12:52.230 256+0 records out 00:12:52.230 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.150869 s, 7.0 MB/s 00:12:52.230 10:26:45 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:52.230 10:26:45 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:12:52.230 256+0 records in 00:12:52.230 256+0 records out 00:12:52.230 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.135053 s, 7.8 MB/s 00:12:52.230 10:26:46 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:52.230 10:26:46 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd15 bs=4096 count=256 oflag=direct 00:12:52.487 256+0 records in 00:12:52.487 256+0 records out 00:12:52.487 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.127083 s, 8.3 MB/s 00:12:52.487 10:26:46 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:52.487 10:26:46 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd2 bs=4096 count=256 oflag=direct 00:12:52.487 256+0 records in 00:12:52.487 256+0 records out 00:12:52.487 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.131651 s, 8.0 MB/s 00:12:52.487 10:26:46 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:52.487 10:26:46 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd3 bs=4096 count=256 oflag=direct 00:12:52.746 256+0 records in 00:12:52.746 256+0 records out 00:12:52.746 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.134847 s, 7.8 MB/s 00:12:52.746 10:26:46 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:52.746 10:26:46 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd4 bs=4096 count=256 oflag=direct 00:12:52.746 256+0 records in 00:12:52.746 256+0 records out 00:12:52.746 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.138796 s, 7.6 MB/s 00:12:52.746 10:26:46 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:52.746 10:26:46 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd5 bs=4096 count=256 oflag=direct 00:12:53.004 256+0 records in 00:12:53.004 256+0 records out 00:12:53.004 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.137826 s, 7.6 MB/s 00:12:53.004 10:26:46 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:53.004 10:26:46 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd6 bs=4096 count=256 oflag=direct 00:12:53.004 256+0 records in 00:12:53.004 256+0 records out 00:12:53.004 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.128415 s, 8.2 MB/s 00:12:53.004 
10:26:46 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:53.004 10:26:46 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd7 bs=4096 count=256 oflag=direct 00:12:53.263 256+0 records in 00:12:53.263 256+0 records out 00:12:53.263 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.145776 s, 7.2 MB/s 00:12:53.263 10:26:47 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:53.263 10:26:47 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd8 bs=4096 count=256 oflag=direct 00:12:53.521 256+0 records in 00:12:53.521 256+0 records out 00:12:53.521 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.142096 s, 7.4 MB/s 00:12:53.521 10:26:47 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:53.521 10:26:47 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd9 bs=4096 count=256 oflag=direct 00:12:53.521 256+0 records in 00:12:53.521 256+0 records out 00:12:53.521 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.222524 s, 4.7 MB/s 00:12:53.521 10:26:47 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' verify 00:12:53.521 10:26:47 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:12:53.521 10:26:47 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:53.521 10:26:47 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:53.521 10:26:47 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:53.521 10:26:47 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:53.521 10:26:47 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:53.521 10:26:47 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:53.521 10:26:47 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:12:53.521 10:26:47 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:53.521 10:26:47 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:12:53.779 10:26:47 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:53.779 10:26:47 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:12:53.779 10:26:47 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:53.779 10:26:47 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:12:53.779 10:26:47 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:53.779 10:26:47 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:12:53.779 10:26:47 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:53.779 10:26:47 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:12:53.779 10:26:47 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:53.779 10:26:47 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:12:53.779 10:26:47 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:53.779 10:26:47 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd15 00:12:53.779 10:26:47 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:53.779 10:26:47 -- 
bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd2 00:12:53.779 10:26:47 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:53.779 10:26:47 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd3 00:12:53.779 10:26:47 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:53.779 10:26:47 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd4 00:12:53.779 10:26:47 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:53.779 10:26:47 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd5 00:12:53.779 10:26:47 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:53.779 10:26:47 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd6 00:12:53.779 10:26:47 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:53.779 10:26:47 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd7 00:12:53.779 10:26:47 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:53.779 10:26:47 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd8 00:12:53.779 10:26:47 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:53.780 10:26:47 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd9 00:12:53.780 10:26:47 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:53.780 10:26:47 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:12:53.780 10:26:47 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:53.780 10:26:47 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:12:53.780 10:26:47 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:53.780 10:26:47 -- bdev/nbd_common.sh@51 -- # local i 00:12:53.780 10:26:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:53.780 10:26:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:54.038 10:26:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:54.038 10:26:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:54.038 10:26:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:54.038 10:26:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:54.038 10:26:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:54.038 10:26:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:54.038 10:26:47 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:54.038 10:26:47 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:54.038 10:26:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:54.038 10:26:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:54.038 10:26:47 -- bdev/nbd_common.sh@41 -- # break 00:12:54.038 10:26:47 -- bdev/nbd_common.sh@45 -- # return 0 00:12:54.038 10:26:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:54.038 10:26:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:54.296 10:26:48 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:54.296 10:26:48 -- 
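[Editor's sketch] The verify pass traced above re-reads each device and byte-compares the first 1 MiB against the pattern file before deleting it; condensed, the nbd_common.sh@80-85 logic amounts to (same illustrative names as the write sketch):

    tmp_file=/tmp/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)
    for i in "${nbd_list[@]}"; do
        # -b prints any differing bytes; -n 1M caps the compare at exactly what was written
        cmp -b -n 1M "$tmp_file" "$i"
    done
    rm "$tmp_file"
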
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:54.296 10:26:48 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:54.296 10:26:48 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:54.296 10:26:48 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:54.296 10:26:48 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:54.296 10:26:48 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:54.556 10:26:48 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:54.556 10:26:48 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:54.556 10:26:48 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:54.556 10:26:48 -- bdev/nbd_common.sh@41 -- # break 00:12:54.556 10:26:48 -- bdev/nbd_common.sh@45 -- # return 0 00:12:54.556 10:26:48 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:54.556 10:26:48 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:12:54.556 10:26:48 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:12:54.556 10:26:48 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:12:54.556 10:26:48 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:12:54.556 10:26:48 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:54.556 10:26:48 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:54.556 10:26:48 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:54.556 10:26:48 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:54.815 10:26:48 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:54.815 10:26:48 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:54.815 10:26:48 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:54.815 10:26:48 -- bdev/nbd_common.sh@41 -- # break 00:12:54.815 10:26:48 -- bdev/nbd_common.sh@45 -- # return 0 00:12:54.815 10:26:48 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:54.815 10:26:48 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:12:55.073 10:26:48 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:12:55.073 10:26:48 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:12:55.073 10:26:48 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:12:55.073 10:26:48 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:55.073 10:26:48 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:55.073 10:26:48 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:55.073 10:26:48 -- bdev/nbd_common.sh@41 -- # break 00:12:55.073 10:26:48 -- bdev/nbd_common.sh@45 -- # return 0 00:12:55.073 10:26:48 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:55.073 10:26:48 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:12:55.332 10:26:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:12:55.332 10:26:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:12:55.332 10:26:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:12:55.332 10:26:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:55.332 10:26:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:55.332 10:26:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:55.332 10:26:49 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:55.332 10:26:49 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:55.332 10:26:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:55.332 10:26:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:55.332 10:26:49 -- bdev/nbd_common.sh@41 -- # 
break 00:12:55.332 10:26:49 -- bdev/nbd_common.sh@45 -- # return 0 00:12:55.332 10:26:49 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:55.332 10:26:49 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:12:55.591 10:26:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:12:55.591 10:26:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:12:55.591 10:26:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:12:55.591 10:26:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:55.591 10:26:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:55.591 10:26:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:55.591 10:26:49 -- bdev/nbd_common.sh@41 -- # break 00:12:55.591 10:26:49 -- bdev/nbd_common.sh@45 -- # return 0 00:12:55.591 10:26:49 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:55.591 10:26:49 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:12:55.849 10:26:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:12:55.849 10:26:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:12:55.849 10:26:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:12:55.849 10:26:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:55.849 10:26:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:55.849 10:26:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:12:55.849 10:26:49 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:56.107 10:26:49 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:56.107 10:26:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:56.107 10:26:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:12:56.107 10:26:49 -- bdev/nbd_common.sh@41 -- # break 00:12:56.107 10:26:49 -- bdev/nbd_common.sh@45 -- # return 0 00:12:56.107 10:26:49 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:56.107 10:26:49 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:12:56.366 10:26:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:12:56.366 10:26:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:12:56.366 10:26:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:12:56.366 10:26:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:56.366 10:26:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:56.366 10:26:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:12:56.366 10:26:50 -- bdev/nbd_common.sh@41 -- # break 00:12:56.366 10:26:50 -- bdev/nbd_common.sh@45 -- # return 0 00:12:56.366 10:26:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:56.366 10:26:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:12:56.624 10:26:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:12:56.624 10:26:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:12:56.624 10:26:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:12:56.624 10:26:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:56.624 10:26:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:56.624 10:26:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:12:56.624 10:26:50 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:56.624 10:26:50 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:56.624 10:26:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:56.624 10:26:50 -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:12:56.624 10:26:50 -- bdev/nbd_common.sh@41 -- # break 00:12:56.624 10:26:50 -- bdev/nbd_common.sh@45 -- # return 0 00:12:56.624 10:26:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:56.624 10:26:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:12:56.883 10:26:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:12:56.883 10:26:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:12:56.883 10:26:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:12:56.883 10:26:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:56.883 10:26:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:56.883 10:26:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:56.883 10:26:50 -- bdev/nbd_common.sh@41 -- # break 00:12:56.883 10:26:50 -- bdev/nbd_common.sh@45 -- # return 0 00:12:56.883 10:26:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:56.883 10:26:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:12:57.141 10:26:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:12:57.141 10:26:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:12:57.141 10:26:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:12:57.141 10:26:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:57.141 10:26:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:57.141 10:26:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:57.141 10:26:50 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:57.141 10:26:50 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:57.141 10:26:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:57.141 10:26:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:57.141 10:26:50 -- bdev/nbd_common.sh@41 -- # break 00:12:57.141 10:26:50 -- bdev/nbd_common.sh@45 -- # return 0 00:12:57.141 10:26:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:57.141 10:26:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:12:57.399 10:26:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:12:57.399 10:26:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:12:57.399 10:26:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:12:57.399 10:26:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:57.399 10:26:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:57.399 10:26:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:57.399 10:26:51 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:57.399 10:26:51 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:57.399 10:26:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:57.399 10:26:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:57.399 10:26:51 -- bdev/nbd_common.sh@41 -- # break 00:12:57.399 10:26:51 -- bdev/nbd_common.sh@45 -- # return 0 00:12:57.399 10:26:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:57.399 10:26:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:12:57.656 10:26:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:12:57.656 10:26:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:12:57.656 10:26:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:12:57.656 10:26:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 
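[Editor's sketch] Each nbd_stop_disk RPC above is followed by the same waitfornbd_exit polling loop, and the pattern repeats for the remaining devices below. Reconstructed from the trace, the helper is roughly as follows (the failure branch is an assumption; every traced run succeeds):

    waitfornbd_exit() {
        local nbd_name=$1
        local i
        # poll /proc/partitions until the kernel drops the device (20 tries, ~2 s cap)
        for (( i = 1; i <= 20; i++ )); do
            grep -q -w "$nbd_name" /proc/partitions || return 0
            sleep 0.1
        done
        echo "nbd device $nbd_name did not disappear" >&2
        return 1
    }
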
00:12:57.656 10:26:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:57.656 10:26:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:12:57.656 10:26:51 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:57.914 10:26:51 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:57.914 10:26:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:57.914 10:26:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:12:57.914 10:26:51 -- bdev/nbd_common.sh@41 -- # break 00:12:57.914 10:26:51 -- bdev/nbd_common.sh@45 -- # return 0 00:12:57.914 10:26:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:57.914 10:26:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:12:57.914 10:26:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:12:58.173 10:26:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:12:58.173 10:26:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:12:58.173 10:26:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:58.173 10:26:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:58.173 10:26:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:12:58.173 10:26:51 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:58.173 10:26:51 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:58.173 10:26:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:58.173 10:26:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:12:58.173 10:26:51 -- bdev/nbd_common.sh@41 -- # break 00:12:58.173 10:26:51 -- bdev/nbd_common.sh@45 -- # return 0 00:12:58.173 10:26:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:58.173 10:26:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:12:58.431 10:26:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:12:58.431 10:26:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:12:58.431 10:26:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:12:58.431 10:26:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:58.431 10:26:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:58.431 10:26:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:12:58.431 10:26:52 -- bdev/nbd_common.sh@41 -- # break 00:12:58.431 10:26:52 -- bdev/nbd_common.sh@45 -- # return 0 00:12:58.431 10:26:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:58.431 10:26:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:12:58.689 10:26:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:12:58.689 10:26:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:12:58.689 10:26:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:12:58.689 10:26:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:58.689 10:26:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:58.689 10:26:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:12:58.689 10:26:52 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:58.689 10:26:52 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:58.689 10:26:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:58.689 10:26:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:12:58.689 10:26:52 -- bdev/nbd_common.sh@41 -- # break 00:12:58.689 10:26:52 -- bdev/nbd_common.sh@45 -- # return 0 00:12:58.689 10:26:52 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:58.689 10:26:52 -- bdev/nbd_common.sh@61 
-- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:58.689 10:26:52 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:58.948 10:26:52 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:58.948 10:26:52 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:58.948 10:26:52 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:58.948 10:26:52 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:58.948 10:26:52 -- bdev/nbd_common.sh@65 -- # echo '' 00:12:58.948 10:26:52 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:58.948 10:26:52 -- bdev/nbd_common.sh@65 -- # true 00:12:58.948 10:26:52 -- bdev/nbd_common.sh@65 -- # count=0 00:12:58.948 10:26:52 -- bdev/nbd_common.sh@66 -- # echo 0 00:12:58.948 10:26:52 -- bdev/nbd_common.sh@104 -- # count=0 00:12:58.948 10:26:52 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:58.948 10:26:52 -- bdev/nbd_common.sh@109 -- # return 0 00:12:58.948 10:26:52 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:12:58.948 10:26:52 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:58.948 10:26:52 -- bdev/nbd_common.sh@132 -- # nbd_list=($2) 00:12:58.948 10:26:52 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:12:58.948 10:26:52 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:12:58.948 10:26:52 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:12:59.206 malloc_lvol_verify 00:12:59.206 10:26:53 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:12:59.463 16557b96-7e31-477c-8f61-72ff406f4c42 00:12:59.463 10:26:53 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:12:59.721 be51a53f-e06c-4a9d-857f-776e8396a13a 00:12:59.721 10:26:53 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:12:59.978 /dev/nbd0 00:12:59.978 10:26:53 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:12:59.978 mke2fs 1.45.5 (07-Jan-2020) 00:12:59.978 00:12:59.978 Filesystem too small for a journal 00:12:59.978 Creating filesystem with 1024 4k blocks and 1024 inodes 00:12:59.978 00:12:59.978 Allocating group tables: 0/1 done 00:12:59.978 Writing inode tables: 0/1 done 00:12:59.978 Writing superblocks and filesystem accounting information: 0/1 done 00:12:59.978 00:12:59.978 10:26:53 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:12:59.978 10:26:53 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:12:59.978 10:26:53 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:59.978 10:26:53 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:12:59.978 10:26:53 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:59.978 10:26:53 -- bdev/nbd_common.sh@51 -- # local i 00:12:59.978 10:26:53 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:59.978 10:26:53 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:00.236 10:26:53 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:00.236 10:26:53 -- 
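[Editor's sketch] After the stop loop, nbd_get_count (nbd_common.sh@61-66, traced above) asks the RPC server which disks remain and counts /dev/nbd entries in the JSON; it returns 0 only when the list is empty. The pipeline is essentially:

    rpc_sock=/var/tmp/spdk-nbd.sock
    nbd_disks_json=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_sock" nbd_get_disks)   # '[]' once all disks are stopped
    nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
    # grep -c exits non-zero on zero matches, hence the 'true' seen in the trace
    count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
    [ "$count" -ne 0 ] && echo "leftover nbd devices: $count" >&2
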
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:00.236 10:26:53 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:00.236 10:26:53 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:00.236 10:26:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:00.236 10:26:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:00.236 10:26:53 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:13:00.236 10:26:54 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:13:00.236 10:26:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:00.236 10:26:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:00.236 10:26:54 -- bdev/nbd_common.sh@41 -- # break 00:13:00.236 10:26:54 -- bdev/nbd_common.sh@45 -- # return 0 00:13:00.236 10:26:54 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:13:00.236 10:26:54 -- bdev/nbd_common.sh@147 -- # return 0 00:13:00.236 10:26:54 -- bdev/blockdev.sh@324 -- # killprocess 111422 00:13:00.236 10:26:54 -- common/autotest_common.sh@926 -- # '[' -z 111422 ']' 00:13:00.236 10:26:54 -- common/autotest_common.sh@930 -- # kill -0 111422 00:13:00.236 10:26:54 -- common/autotest_common.sh@931 -- # uname 00:13:00.236 10:26:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:00.236 10:26:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 111422 00:13:00.236 killing process with pid 111422 00:13:00.236 10:26:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:00.236 10:26:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:00.236 10:26:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 111422' 00:13:00.236 10:26:54 -- common/autotest_common.sh@945 -- # kill 111422 00:13:00.236 10:26:54 -- common/autotest_common.sh@950 -- # wait 111422 00:13:02.140 ************************************ 00:13:02.140 END TEST bdev_nbd 00:13:02.140 ************************************ 00:13:02.140 10:26:55 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:13:02.140 00:13:02.140 real 0m26.167s 00:13:02.140 user 0m34.604s 00:13:02.140 sys 0m8.738s 00:13:02.140 10:26:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:02.140 10:26:55 -- common/autotest_common.sh@10 -- # set +x 00:13:02.140 10:26:55 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:13:02.140 10:26:55 -- bdev/blockdev.sh@762 -- # '[' bdev = nvme ']' 00:13:02.140 10:26:55 -- bdev/blockdev.sh@762 -- # '[' bdev = gpt ']' 00:13:02.140 10:26:55 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:13:02.140 10:26:55 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:02.140 10:26:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:02.140 10:26:55 -- common/autotest_common.sh@10 -- # set +x 00:13:02.140 ************************************ 00:13:02.140 START TEST bdev_fio 00:13:02.140 ************************************ 00:13:02.140 10:26:55 -- common/autotest_common.sh@1104 -- # fio_test_suite '' 00:13:02.140 10:26:55 -- bdev/blockdev.sh@329 -- # local env_context 00:13:02.140 10:26:55 -- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:13:02.140 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:13:02.140 10:26:55 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:13:02.140 10:26:55 -- bdev/blockdev.sh@337 -- # echo '' 00:13:02.140 10:26:55 -- bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:13:02.140 10:26:55 -- bdev/blockdev.sh@337 -- # env_context= 00:13:02.140 10:26:55 -- bdev/blockdev.sh@338 -- # 
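[Editor's sketch] The teardown above ends by killing the SPDK app (pid 111422) through the autotest_common.sh killprocess helper. Condensed, and omitting the sudo special case visible in the trace, it does roughly:

    killprocess() {
        local pid=$1
        kill -0 "$pid"                      # make sure the target still exists
        if [ "$(uname)" = Linux ]; then
            ps --no-headers -o comm= "$pid" # log which process is being killed
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                         # reap it so failures surface in the exit status
    }
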
fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:13:02.140 10:26:55 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:02.140 10:26:55 -- common/autotest_common.sh@1260 -- # local workload=verify 00:13:02.140 10:26:55 -- common/autotest_common.sh@1261 -- # local bdev_type=AIO 00:13:02.140 10:26:55 -- common/autotest_common.sh@1262 -- # local env_context= 00:13:02.140 10:26:55 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:13:02.140 10:26:55 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:02.140 10:26:55 -- common/autotest_common.sh@1270 -- # '[' -z verify ']' 00:13:02.140 10:26:55 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:13:02.140 10:26:55 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:02.140 10:26:55 -- common/autotest_common.sh@1280 -- # cat 00:13:02.140 10:26:55 -- common/autotest_common.sh@1292 -- # '[' verify == verify ']' 00:13:02.140 10:26:55 -- common/autotest_common.sh@1293 -- # cat 00:13:02.140 10:26:55 -- common/autotest_common.sh@1302 -- # '[' AIO == AIO ']' 00:13:02.140 10:26:55 -- common/autotest_common.sh@1303 -- # /usr/src/fio/fio --version 00:13:02.140 10:26:55 -- common/autotest_common.sh@1303 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:13:02.140 10:26:55 -- common/autotest_common.sh@1304 -- # echo serialize_overlap=1 00:13:02.140 10:26:55 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:02.140 10:26:55 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc0]' 00:13:02.140 10:26:55 -- bdev/blockdev.sh@341 -- # echo filename=Malloc0 00:13:02.140 10:26:55 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:02.140 10:26:55 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc1p0]' 00:13:02.140 10:26:55 -- bdev/blockdev.sh@341 -- # echo filename=Malloc1p0 00:13:02.140 10:26:55 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:02.140 10:26:55 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc1p1]' 00:13:02.140 10:26:55 -- bdev/blockdev.sh@341 -- # echo filename=Malloc1p1 00:13:02.140 10:26:55 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:02.140 10:26:55 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p0]' 00:13:02.140 10:26:55 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p0 00:13:02.140 10:26:55 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:02.140 10:26:55 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p1]' 00:13:02.140 10:26:55 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p1 00:13:02.140 10:26:55 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:02.140 10:26:55 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p2]' 00:13:02.140 10:26:55 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p2 00:13:02.140 10:26:55 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:02.140 10:26:55 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p3]' 00:13:02.140 10:26:55 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p3 00:13:02.140 10:26:55 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:02.140 10:26:55 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p4]' 00:13:02.140 10:26:55 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p4 00:13:02.140 10:26:55 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:02.140 10:26:55 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p5]' 00:13:02.140 10:26:55 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p5 00:13:02.140 
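[Editor's sketch] The per-bdev '[job_*]' sections being echoed here (they continue through AIO0 below) extend the bdev.fio that fio_config_gen seeded from its template. Under those assumptions, the generation step amounts to something like (path and bdev subset illustrative; the grep is a hedged equivalent of the traced fio-3 glob match):

    config=/tmp/bdev.fio                      # stand-in for test/bdev/bdev.fio
    : > "$config"                             # the real step copies in a template (elided here)
    # serialize_overlap is only valid on fio 3.x, hence the version probe in the trace
    /usr/src/fio/fio --version | grep -q 'fio-3' && echo 'serialize_overlap=1' >> "$config"
    for b in Malloc0 Malloc1p0 Malloc1p1; do  # illustrative subset of the bdev names
        printf '[job_%s]\nfilename=%s\n' "$b" "$b" >> "$config"
    done
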
10:26:55 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:02.140 10:26:55 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p6]' 00:13:02.140 10:26:55 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p6 00:13:02.140 10:26:55 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:02.140 10:26:55 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p7]' 00:13:02.140 10:26:55 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p7 00:13:02.140 10:26:55 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:02.140 10:26:55 -- bdev/blockdev.sh@340 -- # echo '[job_TestPT]' 00:13:02.140 10:26:55 -- bdev/blockdev.sh@341 -- # echo filename=TestPT 00:13:02.140 10:26:55 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:02.140 10:26:55 -- bdev/blockdev.sh@340 -- # echo '[job_raid0]' 00:13:02.140 10:26:55 -- bdev/blockdev.sh@341 -- # echo filename=raid0 00:13:02.140 10:26:55 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:02.140 10:26:55 -- bdev/blockdev.sh@340 -- # echo '[job_concat0]' 00:13:02.140 10:26:55 -- bdev/blockdev.sh@341 -- # echo filename=concat0 00:13:02.140 10:26:55 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:02.140 10:26:55 -- bdev/blockdev.sh@340 -- # echo '[job_raid1]' 00:13:02.140 10:26:55 -- bdev/blockdev.sh@341 -- # echo filename=raid1 00:13:02.140 10:26:55 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:02.140 10:26:55 -- bdev/blockdev.sh@340 -- # echo '[job_AIO0]' 00:13:02.140 10:26:55 -- bdev/blockdev.sh@341 -- # echo filename=AIO0 00:13:02.140 10:26:55 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:13:02.140 10:26:55 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:02.140 10:26:55 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:13:02.140 10:26:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:02.140 10:26:55 -- common/autotest_common.sh@10 -- # set +x 00:13:02.140 ************************************ 00:13:02.140 START TEST bdev_fio_rw_verify 00:13:02.140 ************************************ 00:13:02.141 10:26:55 -- common/autotest_common.sh@1104 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:02.141 10:26:55 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:02.141 10:26:55 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:13:02.141 10:26:55 -- common/autotest_common.sh@1318 -- # sanitizers=(libasan libclang_rt.asan) 00:13:02.141 10:26:55 -- common/autotest_common.sh@1318 -- # local sanitizers 00:13:02.141 10:26:55 -- common/autotest_common.sh@1319 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:02.141 10:26:55 -- common/autotest_common.sh@1320 -- # shift 00:13:02.141 10:26:55 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:13:02.141 10:26:55 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:13:02.141 10:26:55 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:02.141 10:26:55 -- common/autotest_common.sh@1324 -- # grep libasan 00:13:02.141 10:26:55 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:13:02.141 10:26:55 -- common/autotest_common.sh@1324 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.5 00:13:02.141 10:26:55 -- common/autotest_common.sh@1325 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.5 ]] 00:13:02.141 10:26:55 -- common/autotest_common.sh@1326 -- # break 00:13:02.141 10:26:55 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.5 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:02.141 10:26:55 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:02.400 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:02.400 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:02.400 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:02.400 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:02.400 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:02.400 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:02.400 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:02.400 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:02.400 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:02.400 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:02.400 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:02.400 job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:02.400 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:02.400 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:02.400 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:02.400 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:02.400 fio-3.35 00:13:02.400 Starting 16 threads 00:13:14.604 00:13:14.604 job_Malloc0: (groupid=0, jobs=16): 
err= 0: pid=112697: Fri Jul 12 10:27:07 2024 00:13:14.604 read: IOPS=74.5k, BW=291MiB/s (305MB/s)(2911MiB/10005msec) 00:13:14.605 slat (usec): min=2, max=32052, avg=39.09, stdev=449.21 00:13:14.605 clat (usec): min=11, max=36326, avg=313.18, stdev=1292.66 00:13:14.605 lat (usec): min=29, max=36348, avg=352.27, stdev=1368.00 00:13:14.605 clat percentiles (usec): 00:13:14.605 | 50.000th=[ 184], 99.000th=[ 1156], 99.900th=[16319], 99.990th=[24249], 00:13:14.605 | 99.999th=[35914] 00:13:14.605 write: IOPS=120k, BW=467MiB/s (490MB/s)(4636MiB/9928msec); 0 zone resets 00:13:14.605 slat (usec): min=6, max=46824, avg=64.94, stdev=599.32 00:13:14.605 clat (usec): min=11, max=56263, avg=391.51, stdev=1467.68 00:13:14.605 lat (usec): min=39, max=56298, avg=456.45, stdev=1584.48 00:13:14.605 clat percentiles (usec): 00:13:14.605 | 50.000th=[ 233], 99.000th=[ 5145], 99.900th=[16450], 99.990th=[28181], 00:13:14.605 | 99.999th=[44303] 00:13:14.605 bw ( KiB/s): min=295024, max=764304, per=99.16%, avg=474195.00, stdev=8158.48, samples=304 00:13:14.605 iops : min=73756, max=191076, avg=118548.84, stdev=2039.61, samples=304 00:13:14.605 lat (usec) : 20=0.01%, 50=0.45%, 100=9.60%, 250=54.38%, 500=31.13% 00:13:14.605 lat (usec) : 750=2.35%, 1000=0.52% 00:13:14.605 lat (msec) : 2=0.56%, 4=0.06%, 10=0.26%, 20=0.65%, 50=0.05% 00:13:14.605 lat (msec) : 100=0.01% 00:13:14.605 cpu : usr=58.15%, sys=1.90%, ctx=222920, majf=0, minf=82320 00:13:14.605 IO depths : 1=11.4%, 2=23.7%, 4=51.7%, 8=13.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:14.605 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:14.605 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:14.605 issued rwts: total=745290,1186875,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:14.605 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:14.605 00:13:14.605 Run status group 0 (all jobs): 00:13:14.605 READ: bw=291MiB/s (305MB/s), 291MiB/s-291MiB/s (305MB/s-305MB/s), io=2911MiB (3053MB), run=10005-10005msec 00:13:14.605 WRITE: bw=467MiB/s (490MB/s), 467MiB/s-467MiB/s (490MB/s-490MB/s), io=4636MiB (4861MB), run=9928-9928msec 00:13:15.984 ----------------------------------------------------- 00:13:15.984 Suppressions used: 00:13:15.984 count bytes template 00:13:15.984 16 140 /usr/src/fio/parse.c 00:13:15.984 9556 917376 /usr/src/fio/iolog.c 00:13:15.984 2 596 libcrypto.so 00:13:15.984 ----------------------------------------------------- 00:13:15.984 00:13:15.984 00:13:15.984 real 0m13.795s 00:13:15.984 user 1m37.884s 00:13:15.984 sys 0m3.987s 00:13:15.984 10:27:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:15.984 10:27:09 -- common/autotest_common.sh@10 -- # set +x 00:13:15.984 ************************************ 00:13:15.984 END TEST bdev_fio_rw_verify 00:13:15.984 ************************************ 00:13:15.984 10:27:09 -- bdev/blockdev.sh@348 -- # rm -f 00:13:15.984 10:27:09 -- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:15.984 10:27:09 -- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:13:15.984 10:27:09 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:15.984 10:27:09 -- common/autotest_common.sh@1260 -- # local workload=trim 00:13:15.984 10:27:09 -- common/autotest_common.sh@1261 -- # local bdev_type= 00:13:15.984 10:27:09 -- common/autotest_common.sh@1262 -- # local env_context= 00:13:15.984 10:27:09 -- 
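[Editor's sketch] Before the fio run summarized above, fio_bdev (autotest_common.sh@1316-1331) located the ASan runtime linked into the SPDK fio plugin and forced it in via LD_PRELOAD, so the sanitizer runtime is loaded before fio dlopens the instrumented plugin. The detection reduces to (fio arguments abbreviated as "$@"):

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    for sanitizer in libasan libclang_rt.asan; do
        # the third ldd column holds the resolved runtime path
        asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [ -n "$asan_lib" ] && break
    done
    # preload the sanitizer runtime ahead of the plugin, as in the traced LD_PRELOAD
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$@"
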
common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:13:15.984 10:27:09 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:15.984 10:27:09 -- common/autotest_common.sh@1270 -- # '[' -z trim ']' 00:13:15.984 10:27:09 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:13:15.984 10:27:09 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:15.984 10:27:09 -- common/autotest_common.sh@1280 -- # cat 00:13:15.984 10:27:09 -- common/autotest_common.sh@1292 -- # '[' trim == verify ']' 00:13:15.984 10:27:09 -- common/autotest_common.sh@1307 -- # '[' trim == trim ']' 00:13:15.984 10:27:09 -- common/autotest_common.sh@1308 -- # echo rw=trimwrite 00:13:15.984 10:27:09 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:15.985 10:27:09 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "ef05c313-e0f1-46ff-82a9-975adafd3581"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "ef05c313-e0f1-46ff-82a9-975adafd3581",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "4ece179e-ca4a-5951-b8ba-d332b297cbee"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "4ece179e-ca4a-5951-b8ba-d332b297cbee",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "f5e89f13-0bf1-5921-b161-f069dd9f7943"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "f5e89f13-0bf1-5921-b161-f069dd9f7943",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "b333a681-153d-50e7-a2a5-c2f6018eec6b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b333a681-153d-50e7-a2a5-c2f6018eec6b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' 
"rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "7a67bd11-f11a-5df3-87e0-103f10bb8d7f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7a67bd11-f11a-5df3-87e0-103f10bb8d7f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "1f65182c-9863-5b77-806c-1d963eb76f42"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "1f65182c-9863-5b77-806c-1d963eb76f42",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "e64bc099-6eab-5211-9d09-fe10b06fa1d6"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e64bc099-6eab-5211-9d09-fe10b06fa1d6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "d183281f-6a14-538c-ae56-c9f0a394e340"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "d183281f-6a14-538c-ae56-c9f0a394e340",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": 
"Malloc2p5",' ' "aliases": [' ' "4d2b6b6c-05cc-50f0-b9ea-96d90a40eaae"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "4d2b6b6c-05cc-50f0-b9ea-96d90a40eaae",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "0214e2cc-0b39-5760-a47a-b771dcb6bafd"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "0214e2cc-0b39-5760-a47a-b771dcb6bafd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "06e8e03f-01a7-50f9-8f3a-89f1558d94ed"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "06e8e03f-01a7-50f9-8f3a-89f1558d94ed",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "9a7b985e-4f46-5d62-a341-e2140cfa95bf"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "9a7b985e-4f46-5d62-a341-e2140cfa95bf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "c3b84cf8-134a-4310-85d8-e866ea9525dd"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "c3b84cf8-134a-4310-85d8-e866ea9525dd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' 
"supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "c3b84cf8-134a-4310-85d8-e866ea9525dd",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "6841f909-cd63-4484-bc50-f1f80b539a5b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "6c2393f1-fb7f-4083-b667-9a99ca25ac7c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "773e36b1-c7dc-40c4-8a5c-d5d187339045"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "773e36b1-c7dc-40c4-8a5c-d5d187339045",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "773e36b1-c7dc-40c4-8a5c-d5d187339045",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "f10a349f-a00e-4df4-bd23-cbe391110f11",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "62cf98b6-68da-411c-a4c3-081f13825df4",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "36ae2617-d3b3-49c4-bf12-51c18accd4ca"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "36ae2617-d3b3-49c4-bf12-51c18accd4ca",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "36ae2617-d3b3-49c4-bf12-51c18accd4ca",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": 
"raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "e8e7d7cf-a624-47f7-9657-509450fdd508",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "ce052616-d99a-4313-8e43-898fdacf7d12",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "545b2484-244f-4316-af9f-36f6d10604be"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "545b2484-244f-4316-af9f-36f6d10604be",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:13:15.985 10:27:09 -- bdev/blockdev.sh@353 -- # [[ -n Malloc0 00:13:15.985 Malloc1p0 00:13:15.985 Malloc1p1 00:13:15.985 Malloc2p0 00:13:15.985 Malloc2p1 00:13:15.985 Malloc2p2 00:13:15.985 Malloc2p3 00:13:15.985 Malloc2p4 00:13:15.985 Malloc2p5 00:13:15.985 Malloc2p6 00:13:15.985 Malloc2p7 00:13:15.985 TestPT 00:13:15.985 raid0 00:13:15.985 concat0 ]] 00:13:15.985 10:27:09 -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:15.986 10:27:09 -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "ef05c313-e0f1-46ff-82a9-975adafd3581"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "ef05c313-e0f1-46ff-82a9-975adafd3581",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "4ece179e-ca4a-5951-b8ba-d332b297cbee"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "4ece179e-ca4a-5951-b8ba-d332b297cbee",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "f5e89f13-0bf1-5921-b161-f069dd9f7943"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' 
"num_blocks": 32768,' ' "uuid": "f5e89f13-0bf1-5921-b161-f069dd9f7943",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "b333a681-153d-50e7-a2a5-c2f6018eec6b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b333a681-153d-50e7-a2a5-c2f6018eec6b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "7a67bd11-f11a-5df3-87e0-103f10bb8d7f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7a67bd11-f11a-5df3-87e0-103f10bb8d7f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "1f65182c-9863-5b77-806c-1d963eb76f42"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "1f65182c-9863-5b77-806c-1d963eb76f42",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "e64bc099-6eab-5211-9d09-fe10b06fa1d6"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e64bc099-6eab-5211-9d09-fe10b06fa1d6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": 
false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "d183281f-6a14-538c-ae56-c9f0a394e340"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "d183281f-6a14-538c-ae56-c9f0a394e340",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "4d2b6b6c-05cc-50f0-b9ea-96d90a40eaae"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "4d2b6b6c-05cc-50f0-b9ea-96d90a40eaae",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "0214e2cc-0b39-5760-a47a-b771dcb6bafd"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "0214e2cc-0b39-5760-a47a-b771dcb6bafd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "06e8e03f-01a7-50f9-8f3a-89f1558d94ed"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "06e8e03f-01a7-50f9-8f3a-89f1558d94ed",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "9a7b985e-4f46-5d62-a341-e2140cfa95bf"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "9a7b985e-4f46-5d62-a341-e2140cfa95bf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' 
"zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "c3b84cf8-134a-4310-85d8-e866ea9525dd"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "c3b84cf8-134a-4310-85d8-e866ea9525dd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "c3b84cf8-134a-4310-85d8-e866ea9525dd",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "6841f909-cd63-4484-bc50-f1f80b539a5b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "6c2393f1-fb7f-4083-b667-9a99ca25ac7c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "773e36b1-c7dc-40c4-8a5c-d5d187339045"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "773e36b1-c7dc-40c4-8a5c-d5d187339045",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "773e36b1-c7dc-40c4-8a5c-d5d187339045",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "f10a349f-a00e-4df4-bd23-cbe391110f11",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "62cf98b6-68da-411c-a4c3-081f13825df4",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "36ae2617-d3b3-49c4-bf12-51c18accd4ca"' ' ],' ' "product_name": 
"Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "36ae2617-d3b3-49c4-bf12-51c18accd4ca",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "36ae2617-d3b3-49c4-bf12-51c18accd4ca",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "e8e7d7cf-a624-47f7-9657-509450fdd508",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "ce052616-d99a-4313-8e43-898fdacf7d12",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "545b2484-244f-4316-af9f-36f6d10604be"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "545b2484-244f-4316-af9f-36f6d10604be",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:13:15.986 10:27:09 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:15.986 10:27:09 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc0]' 00:13:15.986 10:27:09 -- bdev/blockdev.sh@356 -- # echo filename=Malloc0 00:13:15.986 10:27:09 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:15.986 10:27:09 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc1p0]' 00:13:15.986 10:27:09 -- bdev/blockdev.sh@356 -- # echo filename=Malloc1p0 00:13:15.986 10:27:09 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:15.986 10:27:09 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc1p1]' 00:13:15.986 10:27:09 -- bdev/blockdev.sh@356 -- # echo filename=Malloc1p1 00:13:15.986 10:27:09 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:15.986 10:27:09 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p0]' 00:13:15.986 10:27:09 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p0 00:13:15.986 10:27:09 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:15.986 
10:27:09 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p1]' 00:13:15.986 10:27:09 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p1 00:13:15.986 10:27:09 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:15.986 10:27:09 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p2]' 00:13:15.986 10:27:09 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p2 00:13:15.986 10:27:09 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:15.986 10:27:09 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p3]' 00:13:15.986 10:27:09 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p3 00:13:15.986 10:27:09 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:15.986 10:27:09 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p4]' 00:13:15.986 10:27:09 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p4 00:13:15.986 10:27:09 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:15.986 10:27:09 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p5]' 00:13:15.986 10:27:09 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p5 00:13:15.986 10:27:09 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:15.986 10:27:09 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p6]' 00:13:15.986 10:27:09 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p6 00:13:15.986 10:27:09 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:15.986 10:27:09 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p7]' 00:13:15.986 10:27:09 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p7 00:13:15.986 10:27:09 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:15.986 10:27:09 -- bdev/blockdev.sh@355 -- # echo '[job_TestPT]' 00:13:15.986 10:27:09 -- bdev/blockdev.sh@356 -- # echo filename=TestPT 00:13:15.986 10:27:09 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:15.986 10:27:09 -- bdev/blockdev.sh@355 -- # echo '[job_raid0]' 00:13:15.986 10:27:09 -- bdev/blockdev.sh@356 -- # echo filename=raid0 00:13:15.986 10:27:09 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:15.986 10:27:09 -- bdev/blockdev.sh@355 -- # echo '[job_concat0]' 00:13:15.986 10:27:09 -- bdev/blockdev.sh@356 -- # echo filename=concat0 00:13:15.986 10:27:09 -- bdev/blockdev.sh@365 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:15.986 10:27:09 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:13:15.986 10:27:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:15.986 10:27:09 -- common/autotest_common.sh@10 -- # set +x 00:13:15.986 ************************************ 00:13:15.986 START TEST bdev_fio_trim 00:13:15.986 ************************************ 00:13:15.987 10:27:09 -- 
common/autotest_common.sh@1104 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:15.987 10:27:09 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:15.987 10:27:09 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:13:15.987 10:27:09 -- common/autotest_common.sh@1318 -- # sanitizers=(libasan libclang_rt.asan) 00:13:15.987 10:27:09 -- common/autotest_common.sh@1318 -- # local sanitizers 00:13:15.987 10:27:09 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:15.987 10:27:09 -- common/autotest_common.sh@1320 -- # shift 00:13:15.987 10:27:09 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:13:15.987 10:27:09 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:13:15.987 10:27:09 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:15.987 10:27:09 -- common/autotest_common.sh@1324 -- # grep libasan 00:13:15.987 10:27:09 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:13:15.987 10:27:09 -- common/autotest_common.sh@1324 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.5 00:13:15.987 10:27:09 -- common/autotest_common.sh@1325 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.5 ]] 00:13:15.987 10:27:09 -- common/autotest_common.sh@1326 -- # break 00:13:15.987 10:27:09 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.5 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:15.987 10:27:09 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:16.245 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.245 job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.245 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.245 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.245 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.245 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.245 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.245 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.245 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 
00:13:16.245 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.245 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.245 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.245 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.245 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.245 fio-3.35 00:13:16.245 Starting 14 threads 00:13:28.448 00:13:28.448 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=112933: Fri Jul 12 10:27:21 2024 00:13:28.448 write: IOPS=127k, BW=497MiB/s (521MB/s)(4978MiB/10014msec); 0 zone resets 00:13:28.448 slat (nsec): min=1919, max=34502k, avg=40069.41, stdev=403466.61 00:13:28.448 clat (usec): min=17, max=34039, avg=273.04, stdev=1070.51 00:13:28.448 lat (usec): min=22, max=34707, avg=313.11, stdev=1143.11 00:13:28.448 clat percentiles (usec): 00:13:28.448 | 50.000th=[ 186], 99.000th=[ 457], 99.900th=[16188], 99.990th=[20055], 00:13:28.448 | 99.999th=[28181] 00:13:28.448 bw ( KiB/s): min=380392, max=714736, per=100.00%, avg=509845.76, stdev=8567.55, samples=267 00:13:28.448 iops : min=95098, max=178684, avg=127461.15, stdev=2141.89, samples=267 00:13:28.448 trim: IOPS=127k, BW=497MiB/s (521MB/s)(4978MiB/10014msec); 0 zone resets 00:13:28.448 slat (usec): min=3, max=28081, avg=27.37, stdev=341.66 00:13:28.448 clat (usec): min=3, max=34708, avg=298.99, stdev=1116.80 00:13:28.448 lat (usec): min=11, max=34725, avg=326.36, stdev=1167.51 00:13:28.448 clat percentiles (usec): 00:13:28.448 | 50.000th=[ 208], 99.000th=[ 486], 99.900th=[16319], 99.990th=[18220], 00:13:28.448 | 99.999th=[27657] 00:13:28.448 bw ( KiB/s): min=380392, max=714736, per=100.00%, avg=509844.92, stdev=8566.94, samples=267 00:13:28.448 iops : min=95098, max=178684, avg=127461.15, stdev=2141.75, samples=267 00:13:28.448 lat (usec) : 4=0.01%, 10=0.05%, 20=0.24%, 50=1.16%, 100=6.89% 00:13:28.448 lat (usec) : 250=64.14%, 500=26.70%, 750=0.17%, 1000=0.02% 00:13:28.448 lat (msec) : 2=0.01%, 4=0.01%, 10=0.14%, 20=0.47%, 50=0.01% 00:13:28.448 cpu : usr=68.96%, sys=0.41%, ctx=167333, majf=0, minf=770 00:13:28.448 IO depths : 1=12.3%, 2=24.5%, 4=50.1%, 8=13.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:28.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:28.448 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:28.448 issued rwts: total=0,1274477,1274482,0 short=0,0,0,0 dropped=0,0,0,0 00:13:28.448 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:28.448 00:13:28.448 Run status group 0 (all jobs): 00:13:28.448 WRITE: bw=497MiB/s (521MB/s), 497MiB/s-497MiB/s (521MB/s-521MB/s), io=4978MiB (5220MB), run=10014-10014msec 00:13:28.448 TRIM: bw=497MiB/s (521MB/s), 497MiB/s-497MiB/s (521MB/s-521MB/s), io=4978MiB (5220MB), run=10014-10014msec 00:13:29.823 ----------------------------------------------------- 00:13:29.823 Suppressions used: 00:13:29.823 count bytes template 00:13:29.823 14 129 /usr/src/fio/parse.c 00:13:29.823 2 596 libcrypto.so 00:13:29.823 ----------------------------------------------------- 00:13:29.823 00:13:29.823 ************************************ 00:13:29.823 END TEST bdev_fio_trim 00:13:29.823 ************************************ 00:13:29.823 
00:13:29.823 real 0m13.563s 00:13:29.823 user 1m41.168s 00:13:29.823 sys 0m1.461s 00:13:29.823 10:27:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:29.823 10:27:23 -- common/autotest_common.sh@10 -- # set +x 00:13:29.823 10:27:23 -- bdev/blockdev.sh@366 -- # rm -f 00:13:29.823 10:27:23 -- bdev/blockdev.sh@367 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:29.823 /home/vagrant/spdk_repo/spdk 00:13:29.823 ************************************ 00:13:29.823 END TEST bdev_fio 00:13:29.823 ************************************ 00:13:29.823 10:27:23 -- bdev/blockdev.sh@368 -- # popd 00:13:29.823 10:27:23 -- bdev/blockdev.sh@369 -- # trap - SIGINT SIGTERM EXIT 00:13:29.823 00:13:29.823 real 0m27.664s 00:13:29.823 user 3m19.250s 00:13:29.823 sys 0m5.537s 00:13:29.823 10:27:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:29.823 10:27:23 -- common/autotest_common.sh@10 -- # set +x 00:13:29.823 10:27:23 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:29.823 10:27:23 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:29.823 10:27:23 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:13:29.823 10:27:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:29.823 10:27:23 -- common/autotest_common.sh@10 -- # set +x 00:13:29.823 ************************************ 00:13:29.823 START TEST bdev_verify 00:13:29.823 ************************************ 00:13:29.823 10:27:23 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:29.823 [2024-07-12 10:27:23.572635] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
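The 14 threads in the trim run above are exactly the bdevs whose supported_io_types.unmap flag is true in the JSON dump: raid1 and AIO0 both report unmap: false, so neither gets a job section. A minimal sketch of the job-file assembly traced at blockdev.sh@354-356 follows; bdevs is the script's own array, while the redirection target is assumed, since the trace does not show where the sections are written:

    # bdevs holds one JSON object per bdev (dumped above); keep only the
    # trim-capable ones and emit one fio job section per name.
    for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name'); do
        echo "[job_$b]"      # fio section header, one per bdev
        echo "filename=$b"   # resolved by the spdk_bdev ioengine
    done >> /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio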
00:13:29.823 [2024-07-12 10:27:23.573053] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113137 ] 00:13:29.823 [2024-07-12 10:27:23.742486] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:30.082 [2024-07-12 10:27:23.930350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:30.082 [2024-07-12 10:27:23.930361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:30.649 [2024-07-12 10:27:24.302546] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:30.649 [2024-07-12 10:27:24.302817] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:30.649 [2024-07-12 10:27:24.310499] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:30.649 [2024-07-12 10:27:24.310732] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:30.649 [2024-07-12 10:27:24.318529] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:30.649 [2024-07-12 10:27:24.318679] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:13:30.649 [2024-07-12 10:27:24.318814] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:13:30.649 [2024-07-12 10:27:24.504569] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:30.649 [2024-07-12 10:27:24.504972] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:30.649 [2024-07-12 10:27:24.505066] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:30.649 [2024-07-12 10:27:24.505298] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:30.649 [2024-07-12 10:27:24.508016] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:30.649 [2024-07-12 10:27:24.508213] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:31.216 Running I/O for 5 seconds... 
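For reference, the bdevperf invocation echoed above can be replayed on its own; the flags map directly onto the job headers that follow: -q 128 is the queue depth, -o 4096 the I/O size in bytes, -w verify the workload, -t 5 the runtime in seconds, and -m 0x3 the core mask behind the two reactors started above (the -C flag and the trailing empty argument are carried over from the harness as-is):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''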
00:13:36.527 00:13:36.527 Latency(us) 00:13:36.527 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:36.527 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:36.527 Verification LBA range: start 0x0 length 0x1000 00:13:36.527 Malloc0 : 5.25 1305.58 5.10 0.00 0.00 97430.93 2651.23 270723.26 00:13:36.527 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:36.527 Verification LBA range: start 0x1000 length 0x1000 00:13:36.527 Malloc0 : 5.20 1245.22 4.86 0.00 0.00 101780.81 2934.23 285975.27 00:13:36.527 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:36.527 Verification LBA range: start 0x0 length 0x800 00:13:36.527 Malloc1p0 : 5.25 918.29 3.59 0.00 0.00 138345.42 5004.57 163005.91 00:13:36.527 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:36.527 Verification LBA range: start 0x800 length 0x800 00:13:36.527 Malloc1p0 : 5.21 876.66 3.42 0.00 0.00 144555.13 4647.10 171585.16 00:13:36.527 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:36.527 Verification LBA range: start 0x0 length 0x800 00:13:36.527 Malloc1p1 : 5.25 918.08 3.59 0.00 0.00 138127.97 4736.47 157286.40 00:13:36.527 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:36.527 Verification LBA range: start 0x800 length 0x800 00:13:36.527 Malloc1p1 : 5.21 876.46 3.42 0.00 0.00 144301.60 4676.89 166818.91 00:13:36.527 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:36.527 Verification LBA range: start 0x0 length 0x200 00:13:36.527 Malloc2p0 : 5.25 917.87 3.59 0.00 0.00 137885.86 4617.31 153473.40 00:13:36.527 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:36.527 Verification LBA range: start 0x200 length 0x200 00:13:36.527 Malloc2p0 : 5.21 876.23 3.42 0.00 0.00 144057.15 4855.62 162052.65 00:13:36.527 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:36.527 Verification LBA range: start 0x0 length 0x200 00:13:36.527 Malloc2p1 : 5.25 917.64 3.58 0.00 0.00 137690.40 4796.04 148707.14 00:13:36.527 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:36.527 Verification LBA range: start 0x200 length 0x200 00:13:36.527 Malloc2p1 : 5.21 875.90 3.42 0.00 0.00 143819.99 4647.10 158239.65 00:13:36.527 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:36.527 Verification LBA range: start 0x0 length 0x200 00:13:36.527 Malloc2p2 : 5.26 917.42 3.58 0.00 0.00 137496.34 4617.31 143940.89 00:13:36.527 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:36.527 Verification LBA range: start 0x200 length 0x200 00:13:36.527 Malloc2p2 : 5.21 875.62 3.42 0.00 0.00 143582.98 4587.52 153473.40 00:13:36.527 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:36.527 Verification LBA range: start 0x0 length 0x200 00:13:36.527 Malloc2p3 : 5.26 917.19 3.58 0.00 0.00 137280.65 5242.88 138221.38 00:13:36.527 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:36.527 Verification LBA range: start 0x200 length 0x200 00:13:36.527 Malloc2p3 : 5.21 875.32 3.42 0.00 0.00 143359.83 5004.57 148707.14 00:13:36.527 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:36.527 Verification LBA range: start 0x0 length 0x200 00:13:36.527 Malloc2p4 : 5.26 916.96 3.58 0.00 0.00 137031.67 
4974.78 133455.13 00:13:36.527 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:36.527 Verification LBA range: start 0x200 length 0x200 00:13:36.527 Malloc2p4 : 5.22 875.06 3.42 0.00 0.00 143103.39 4944.99 144894.14 00:13:36.527 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:36.527 Verification LBA range: start 0x0 length 0x200 00:13:36.527 Malloc2p5 : 5.26 916.73 3.58 0.00 0.00 136802.05 5242.88 127735.62 00:13:36.527 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:36.527 Verification LBA range: start 0x200 length 0x200 00:13:36.527 Malloc2p5 : 5.22 874.81 3.42 0.00 0.00 142864.51 4766.25 140127.88 00:13:36.527 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:36.527 Verification LBA range: start 0x0 length 0x200 00:13:36.527 Malloc2p6 : 5.26 916.49 3.58 0.00 0.00 136584.56 5123.72 122969.37 00:13:36.527 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:36.527 Verification LBA range: start 0x200 length 0x200 00:13:36.527 Malloc2p6 : 5.24 886.24 3.46 0.00 0.00 141297.12 5272.67 135361.63 00:13:36.527 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:36.527 Verification LBA range: start 0x0 length 0x200 00:13:36.527 Malloc2p7 : 5.26 916.27 3.58 0.00 0.00 136337.53 4736.47 118679.74 00:13:36.527 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:36.527 Verification LBA range: start 0x200 length 0x200 00:13:36.527 Malloc2p7 : 5.24 886.02 3.46 0.00 0.00 141057.49 5183.30 129642.12 00:13:36.527 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:36.527 Verification LBA range: start 0x0 length 0x1000 00:13:36.527 TestPT : 5.26 901.62 3.52 0.00 0.00 138235.28 12213.53 119632.99 00:13:36.527 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:36.527 Verification LBA range: start 0x1000 length 0x1000 00:13:36.527 TestPT : 5.24 871.46 3.40 0.00 0.00 143070.09 14477.50 130595.37 00:13:36.527 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:36.527 Verification LBA range: start 0x0 length 0x2000 00:13:36.527 raid0 : 5.27 915.84 3.58 0.00 0.00 135717.08 4527.94 114390.11 00:13:36.527 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:36.527 Verification LBA range: start 0x2000 length 0x2000 00:13:36.527 raid0 : 5.24 885.57 3.46 0.00 0.00 140408.99 5123.72 114390.11 00:13:36.527 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:36.527 Verification LBA range: start 0x0 length 0x2000 00:13:36.527 concat0 : 5.27 915.62 3.58 0.00 0.00 135458.48 4736.47 115819.99 00:13:36.527 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:36.527 Verification LBA range: start 0x2000 length 0x2000 00:13:36.527 concat0 : 5.24 885.36 3.46 0.00 0.00 140126.70 5183.30 110100.48 00:13:36.527 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:36.527 Verification LBA range: start 0x0 length 0x1000 00:13:36.527 raid1 : 5.27 915.39 3.58 0.00 0.00 135187.81 5928.03 116296.61 00:13:36.527 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:36.527 Verification LBA range: start 0x1000 length 0x1000 00:13:36.527 raid1 : 5.24 885.14 3.46 0.00 0.00 139876.04 5868.45 106764.10 00:13:36.527 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:36.528 Verification LBA range: 
start 0x0 length 0x4e2 00:13:36.528 AIO0 : 5.27 915.02 3.57 0.00 0.00 134909.61 4408.79 116773.24 00:13:36.528 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:36.528 Verification LBA range: start 0x4e2 length 0x4e2 00:13:36.528 AIO0 : 5.25 884.79 3.46 0.00 0.00 139584.41 5570.56 106764.10 00:13:36.528 =================================================================================================================== 00:13:36.528 Total : 29477.86 115.15 0.00 0.00 136082.70 2651.23 285975.27 00:13:38.432 ************************************ 00:13:38.432 END TEST bdev_verify 00:13:38.432 ************************************ 00:13:38.432 00:13:38.432 real 0m8.735s 00:13:38.432 user 0m15.619s 00:13:38.432 sys 0m0.777s 00:13:38.432 10:27:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:38.432 10:27:32 -- common/autotest_common.sh@10 -- # set +x 00:13:38.432 10:27:32 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:38.432 10:27:32 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:13:38.432 10:27:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:38.432 10:27:32 -- common/autotest_common.sh@10 -- # set +x 00:13:38.432 ************************************ 00:13:38.432 START TEST bdev_verify_big_io 00:13:38.432 ************************************ 00:13:38.432 10:27:32 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:38.432 [2024-07-12 10:27:32.342199] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
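A quick consistency check on the verify totals above, before the big_io pass gets going:

    29477.86 IOPS × 4096 B  ≈ 120,741,000 B/s
    120,741,000 / 1,048,576 ≈ 115.15 MiB/s   (matches the Total row)

The big_io run launched above passes -o 65536, so the same check there has to be done with 64 KiB I/Os.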
00:13:38.432 [2024-07-12 10:27:32.342593] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113281 ] 00:13:38.691 [2024-07-12 10:27:32.510276] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:38.950 [2024-07-12 10:27:32.705550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:38.950 [2024-07-12 10:27:32.705565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:39.519 [2024-07-12 10:27:33.130273] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:39.519 [2024-07-12 10:27:33.130536] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:39.519 [2024-07-12 10:27:33.138189] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:39.519 [2024-07-12 10:27:33.138437] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:39.519 [2024-07-12 10:27:33.146261] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:39.519 [2024-07-12 10:27:33.146452] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:13:39.519 [2024-07-12 10:27:33.146593] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:13:39.519 [2024-07-12 10:27:33.374044] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:39.520 [2024-07-12 10:27:33.374499] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:39.520 [2024-07-12 10:27:33.374792] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:39.520 [2024-07-12 10:27:33.374974] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:39.520 [2024-07-12 10:27:33.378387] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:39.520 [2024-07-12 10:27:33.378628] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:40.084 [2024-07-12 10:27:33.708197] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:13:40.084 [2024-07-12 10:27:33.711267] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:13:40.084 [2024-07-12 10:27:33.714872] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:13:40.084 [2024-07-12 10:27:33.718303] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). 
Queue depth is limited to 32 00:13:40.084 [2024-07-12 10:27:33.721327] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:13:40.084 [2024-07-12 10:27:33.724766] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:13:40.085 [2024-07-12 10:27:33.727734] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:13:40.085 [2024-07-12 10:27:33.731227] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:13:40.085 [2024-07-12 10:27:33.734283] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:13:40.085 [2024-07-12 10:27:33.737775] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:13:40.085 [2024-07-12 10:27:33.740744] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:13:40.085 [2024-07-12 10:27:33.744281] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:13:40.085 [2024-07-12 10:27:33.747168] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:13:40.085 [2024-07-12 10:27:33.750642] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:13:40.085 [2024-07-12 10:27:33.754384] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:13:40.085 [2024-07-12 10:27:33.757337] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). 
Queue depth is limited to 32 00:13:40.085 [2024-07-12 10:27:33.825674] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:13:40.085 [2024-07-12 10:27:33.831931] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:13:40.085 Running I/O for 5 seconds... 00:13:46.645 00:13:46.645 Latency(us) 00:13:46.645 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:46.645 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:46.645 Verification LBA range: start 0x0 length 0x100 00:13:46.645 Malloc0 : 5.48 363.92 22.75 0.00 0.00 340689.05 19184.17 1067641.02 00:13:46.645 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:46.645 Verification LBA range: start 0x100 length 0x100 00:13:46.645 Malloc0 : 5.59 355.40 22.21 0.00 0.00 352471.75 18707.55 1136275.08 00:13:46.645 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:46.645 Verification LBA range: start 0x0 length 0x80 00:13:46.645 Malloc1p0 : 5.48 278.41 17.40 0.00 0.00 442145.54 37653.41 964689.92 00:13:46.645 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:46.645 Verification LBA range: start 0x80 length 0x80 00:13:46.645 Malloc1p0 : 5.73 208.41 13.03 0.00 0.00 586442.35 41466.41 1044763.00 00:13:46.645 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:46.645 Verification LBA range: start 0x0 length 0x80 00:13:46.645 Malloc1p1 : 5.73 132.18 8.26 0.00 0.00 915192.56 39798.23 1944631.85 00:13:46.645 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:46.645 Verification LBA range: start 0x80 length 0x80 00:13:46.645 Malloc1p1 : 5.84 123.94 7.75 0.00 0.00 969065.79 43134.60 2013265.92 00:13:46.645 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:46.645 Verification LBA range: start 0x0 length 0x20 00:13:46.645 Malloc2p0 : 5.57 73.21 4.58 0.00 0.00 412199.76 7596.22 621519.59 00:13:46.645 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:46.645 Verification LBA range: start 0x20 length 0x20 00:13:46.645 Malloc2p0 : 5.60 69.68 4.35 0.00 0.00 431695.10 7387.69 652023.62 00:13:46.645 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:46.645 Verification LBA range: start 0x0 length 0x20 00:13:46.645 Malloc2p1 : 5.57 73.20 4.58 0.00 0.00 410427.60 7179.17 606267.58 00:13:46.645 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:46.645 Verification LBA range: start 0x20 length 0x20 00:13:46.645 Malloc2p1 : 5.60 69.66 4.35 0.00 0.00 429603.76 7536.64 632958.60 00:13:46.645 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:46.645 Verification LBA range: start 0x0 length 0x20 00:13:46.645 Malloc2p2 : 5.57 73.19 4.57 0.00 0.00 408729.18 6940.86 591015.56 00:13:46.645 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:46.645 Verification LBA range: start 0x20 length 0x20 00:13:46.645 Malloc2p2 : 5.60 69.65 4.35 0.00 0.00 427620.11 7626.01 613893.59 00:13:46.645 Job: Malloc2p3 (Core Mask 0x1, workload: 
verify, depth: 32, IO size: 65536) 00:13:46.645 Verification LBA range: start 0x0 length 0x20 00:13:46.645 Malloc2p3 : 5.58 73.17 4.57 0.00 0.00 406966.88 8400.52 575763.55 00:13:46.645 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:46.645 Verification LBA range: start 0x20 length 0x20 00:13:46.645 Malloc2p3 : 5.60 69.63 4.35 0.00 0.00 425652.04 8102.63 594828.57 00:13:46.645 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:46.645 Verification LBA range: start 0x0 length 0x20 00:13:46.645 Malloc2p4 : 5.58 73.16 4.57 0.00 0.00 405254.30 7685.59 560511.53 00:13:46.645 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:46.645 Verification LBA range: start 0x20 length 0x20 00:13:46.645 Malloc2p4 : 5.60 69.62 4.35 0.00 0.00 423779.76 8043.05 579576.55 00:13:46.645 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:46.645 Verification LBA range: start 0x0 length 0x20 00:13:46.645 Malloc2p5 : 5.58 73.15 4.57 0.00 0.00 403461.11 7983.48 545259.52 00:13:46.645 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:46.645 Verification LBA range: start 0x20 length 0x20 00:13:46.645 Malloc2p5 : 5.60 69.61 4.35 0.00 0.00 421846.07 7983.48 564324.54 00:13:46.645 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:46.645 Verification LBA range: start 0x0 length 0x20 00:13:46.645 Malloc2p6 : 5.58 73.13 4.57 0.00 0.00 401785.91 8221.79 530007.51 00:13:46.645 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:46.645 Verification LBA range: start 0x20 length 0x20 00:13:46.645 Malloc2p6 : 5.67 71.93 4.50 0.00 0.00 407437.66 8162.21 552885.53 00:13:46.645 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:46.645 Verification LBA range: start 0x0 length 0x20 00:13:46.645 Malloc2p7 : 5.61 76.43 4.78 0.00 0.00 385637.56 7328.12 514755.49 00:13:46.645 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:46.645 Verification LBA range: start 0x20 length 0x20 00:13:46.645 Malloc2p7 : 5.67 71.91 4.49 0.00 0.00 405645.94 7983.48 537633.51 00:13:46.645 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:46.645 Verification LBA range: start 0x0 length 0x100 00:13:46.645 TestPT : 5.77 132.16 8.26 0.00 0.00 873603.79 52190.49 1967509.88 00:13:46.645 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:46.645 Verification LBA range: start 0x100 length 0x100 00:13:46.645 TestPT : 5.85 123.78 7.74 0.00 0.00 926848.23 63867.81 2043769.95 00:13:46.645 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:46.645 Verification LBA range: start 0x0 length 0x200 00:13:46.645 raid0 : 5.77 138.03 8.63 0.00 0.00 828612.75 40036.54 1937005.85 00:13:46.646 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:46.646 Verification LBA range: start 0x200 length 0x200 00:13:46.646 raid0 : 5.85 129.57 8.10 0.00 0.00 874436.52 43134.60 1982761.89 00:13:46.646 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:46.646 Verification LBA range: start 0x0 length 0x200 00:13:46.646 concat0 : 5.80 142.92 8.93 0.00 0.00 787183.45 19303.33 1929379.84 00:13:46.646 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:46.646 Verification LBA range: start 0x200 length 0x200 00:13:46.646 concat0 : 5.81 144.09 9.01 0.00 0.00 
782506.82 41466.41 1982761.89 00:13:46.646 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:46.646 Verification LBA range: start 0x0 length 0x100 00:13:46.646 raid1 : 5.77 159.82 9.99 0.00 0.00 700491.57 19065.02 1929379.84 00:13:46.646 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:46.646 Verification LBA range: start 0x100 length 0x100 00:13:46.646 raid1 : 5.83 171.16 10.70 0.00 0.00 649346.96 17992.61 1998013.91 00:13:46.646 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536) 00:13:46.646 Verification LBA range: start 0x0 length 0x4e 00:13:46.646 AIO0 : 5.80 165.82 10.36 0.00 0.00 407024.13 1541.59 1121023.07 00:13:46.646 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536) 00:13:46.646 Verification LBA range: start 0x4e length 0x4e 00:13:46.646 AIO0 : 5.85 149.21 9.33 0.00 0.00 446549.55 3247.01 1166779.11 00:13:46.646 =================================================================================================================== 00:13:46.646 Total : 4069.16 254.32 0.00 0.00 556625.22 1541.59 2043769.95 00:13:48.019 ************************************ 00:13:48.019 END TEST bdev_verify_big_io 00:13:48.019 ************************************ 00:13:48.019 00:13:48.019 real 0m9.370s 00:13:48.019 user 0m17.165s 00:13:48.019 sys 0m0.597s 00:13:48.019 10:27:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:48.019 10:27:41 -- common/autotest_common.sh@10 -- # set +x 00:13:48.019 10:27:41 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:48.019 10:27:41 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:13:48.019 10:27:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:48.019 10:27:41 -- common/autotest_common.sh@10 -- # set +x 00:13:48.019 ************************************ 00:13:48.019 START TEST bdev_write_zeroes 00:13:48.019 ************************************ 00:13:48.019 10:27:41 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:48.019 [2024-07-12 10:27:41.762794] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
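The same bandwidth check holds for the big_io totals above:

    4069.16 IOPS × 65536 B  ≈ 266,676,000 B/s
    266,676,000 / 1,048,576 ≈ 254.32 MiB/s   (matches the Total row)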
00:13:48.019 [2024-07-12 10:27:41.763131] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113427 ] 00:13:48.019 [2024-07-12 10:27:41.928626] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:48.277 [2024-07-12 10:27:42.099721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:48.535 [2024-07-12 10:27:42.429784] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:48.535 [2024-07-12 10:27:42.430156] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:48.535 [2024-07-12 10:27:42.437757] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:48.535 [2024-07-12 10:27:42.437962] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:48.535 [2024-07-12 10:27:42.445783] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:48.535 [2024-07-12 10:27:42.445959] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:13:48.535 [2024-07-12 10:27:42.446084] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:13:48.793 [2024-07-12 10:27:42.622805] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:48.794 [2024-07-12 10:27:42.623271] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:48.794 [2024-07-12 10:27:42.623409] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:48.794 [2024-07-12 10:27:42.623644] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:48.794 [2024-07-12 10:27:42.626060] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:48.794 [2024-07-12 10:27:42.626238] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:49.052 Running I/O for 1 seconds... 
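Each bdevperf pass in this suite re-registers the TestPT passthru vbdev on top of Malloc3, as the vbdev_passthru notices above show. The bdev.json handed to bdevperf is not printed in the log; a plausible fragment for that one vbdev, assuming the bdev_passthru_create RPC method name and matching the driver_specific fields in the earlier dump, would be:

    cat > passthru-fragment.json <<'EOF'
    {
      "method": "bdev_passthru_create",
      "params": { "name": "TestPT", "base_bdev_name": "Malloc3" }
    }
    EOF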
00:13:50.435 00:13:50.435 Latency(us) 00:13:50.435 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:50.435 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:50.435 Malloc0 : 1.03 5823.92 22.75 0.00 0.00 21966.99 633.02 37415.10 00:13:50.435 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:50.435 Malloc1p0 : 1.03 5817.09 22.72 0.00 0.00 21955.75 804.31 36700.16 00:13:50.435 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:50.435 Malloc1p1 : 1.04 5811.17 22.70 0.00 0.00 21938.65 744.73 35985.22 00:13:50.435 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:50.435 Malloc2p0 : 1.04 5805.32 22.68 0.00 0.00 21924.06 785.69 35508.60 00:13:50.435 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:50.435 Malloc2p1 : 1.04 5799.48 22.65 0.00 0.00 21903.49 748.45 34793.66 00:13:50.435 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:50.435 Malloc2p2 : 1.04 5792.75 22.63 0.00 0.00 21895.09 744.73 34078.72 00:13:50.435 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:50.435 Malloc2p3 : 1.04 5786.89 22.61 0.00 0.00 21885.82 752.17 33363.78 00:13:50.435 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:50.435 Malloc2p4 : 1.04 5781.09 22.58 0.00 0.00 21866.16 744.73 32887.16 00:13:50.435 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:50.435 Malloc2p5 : 1.04 5775.05 22.56 0.00 0.00 21856.29 759.62 32172.22 00:13:50.435 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:50.435 Malloc2p6 : 1.04 5769.08 22.54 0.00 0.00 21838.51 741.00 31695.59 00:13:50.435 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:50.435 Malloc2p7 : 1.04 5763.29 22.51 0.00 0.00 21822.69 848.99 30980.65 00:13:50.435 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:50.435 TestPT : 1.04 5757.45 22.49 0.00 0.00 21803.08 793.13 30146.56 00:13:50.435 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:50.435 raid0 : 1.05 5750.64 22.46 0.00 0.00 21780.56 1333.06 28835.84 00:13:50.436 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:50.436 concat0 : 1.05 5744.07 22.44 0.00 0.00 21743.10 1310.72 28001.75 00:13:50.436 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:50.436 raid1 : 1.06 5822.73 22.75 0.00 0.00 21376.68 2025.66 26095.24 00:13:50.436 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:50.436 AIO0 : 1.06 5812.91 22.71 0.00 0.00 21315.55 2010.76 24069.59 00:13:50.436 =================================================================================================================== 00:13:50.436 Total : 92612.92 361.77 0.00 0.00 21803.34 633.02 37415.10 00:13:51.812 ************************************ 00:13:51.812 END TEST bdev_write_zeroes 00:13:51.812 ************************************ 00:13:51.812 00:13:51.812 real 0m4.005s 00:13:51.812 user 0m3.348s 00:13:51.812 sys 0m0.461s 00:13:51.812 10:27:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:51.812 10:27:45 -- common/autotest_common.sh@10 -- # set +x 00:13:52.071 10:27:45 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:52.071 10:27:45 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:13:52.071 10:27:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:52.071 10:27:45 -- common/autotest_common.sh@10 -- # set +x 00:13:52.071 ************************************ 00:13:52.071 START TEST bdev_json_nonenclosed 00:13:52.071 ************************************ 00:13:52.071 10:27:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:52.071 [2024-07-12 10:27:45.827762] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:13:52.071 [2024-07-12 10:27:45.828201] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113501 ] 00:13:52.330 [2024-07-12 10:27:45.997015] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:52.330 [2024-07-12 10:27:46.158959] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.330 [2024-07-12 10:27:46.159478] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:13:52.330 [2024-07-12 10:27:46.159645] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:52.589 ************************************ 00:13:52.589 END TEST bdev_json_nonenclosed 00:13:52.589 ************************************ 00:13:52.589 00:13:52.589 real 0m0.704s 00:13:52.589 user 0m0.455s 00:13:52.589 sys 0m0.148s 00:13:52.589 10:27:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:52.589 10:27:46 -- common/autotest_common.sh@10 -- # set +x 00:13:52.849 10:27:46 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:52.849 10:27:46 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:13:52.849 10:27:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:52.849 10:27:46 -- common/autotest_common.sh@10 -- # set +x 00:13:52.849 ************************************ 00:13:52.849 START TEST bdev_json_nonarray 00:13:52.849 ************************************ 00:13:52.849 10:27:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:52.849 [2024-07-12 10:27:46.587126] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
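The nonenclosed case that just passed feeds bdevperf a config whose top level is not wrapped in braces and asserts the 'not enclosed in {}' rejection seen above. The file itself is not shown in the log; an assumed minimal input of that shape:

    cat > nonenclosed.json <<'EOF'
    "subsystems": []
    EOF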
00:13:52.849 [2024-07-12 10:27:46.587543] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113532 ] 00:13:52.849 [2024-07-12 10:27:46.757287] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.107 [2024-07-12 10:27:46.914813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.107 [2024-07-12 10:27:46.915297] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:13:53.107 [2024-07-12 10:27:46.915532] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:53.367 ************************************ 00:13:53.367 END TEST bdev_json_nonarray 00:13:53.367 ************************************ 00:13:53.367 00:13:53.367 real 0m0.705s 00:13:53.367 user 0m0.465s 00:13:53.367 sys 0m0.138s 00:13:53.367 10:27:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:53.367 10:27:47 -- common/autotest_common.sh@10 -- # set +x 00:13:53.367 10:27:47 -- bdev/blockdev.sh@785 -- # [[ bdev == bdev ]] 00:13:53.367 10:27:47 -- bdev/blockdev.sh@786 -- # run_test bdev_qos qos_test_suite '' 00:13:53.367 10:27:47 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:53.367 10:27:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:53.367 10:27:47 -- common/autotest_common.sh@10 -- # set +x 00:13:53.367 ************************************ 00:13:53.367 START TEST bdev_qos 00:13:53.367 ************************************ 00:13:53.367 10:27:47 -- common/autotest_common.sh@1104 -- # qos_test_suite '' 00:13:53.625 10:27:47 -- bdev/blockdev.sh@444 -- # QOS_PID=113570 00:13:53.625 Process qos testing pid: 113570 00:13:53.625 10:27:47 -- bdev/blockdev.sh@445 -- # echo 'Process qos testing pid: 113570' 00:13:53.625 10:27:47 -- bdev/blockdev.sh@446 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT 00:13:53.625 10:27:47 -- bdev/blockdev.sh@447 -- # waitforlisten 113570 00:13:53.625 10:27:47 -- bdev/blockdev.sh@443 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' 00:13:53.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:53.625 10:27:47 -- common/autotest_common.sh@819 -- # '[' -z 113570 ']' 00:13:53.625 10:27:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:53.625 10:27:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:53.625 10:27:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:53.625 10:27:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:53.625 10:27:47 -- common/autotest_common.sh@10 -- # set +x 00:13:53.625 [2024-07-12 10:27:47.339470] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
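Both JSON negative tests above hand bdevperf a deliberately malformed --json config and pass only when json_config.c rejects it with the errors shown ("not enclosed in {}" and "'subsystems' should be an array"). For contrast, a hedged sketch of a config shape that would satisfy both checks; the empty bdev config list is illustrative and not taken from this run:

  cat > /tmp/valid.json <<'EOF'
  {
    "subsystems": [
      { "subsystem": "bdev", "config": [] }
    ]
  }
  EOF
  # same bdevperf binary and flags as the traces above, pointed at the valid config
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /tmp/valid.json -q 128 -o 4096 -w write_zeroes -t 1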
00:13:53.625 [2024-07-12 10:27:47.339681] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113570 ] 00:13:53.625 [2024-07-12 10:27:47.496598] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.889 [2024-07-12 10:27:47.723455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:54.491 10:27:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:54.491 10:27:48 -- common/autotest_common.sh@852 -- # return 0 00:13:54.491 10:27:48 -- bdev/blockdev.sh@449 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512 00:13:54.491 10:27:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:54.491 10:27:48 -- common/autotest_common.sh@10 -- # set +x 00:13:54.491 Malloc_0 00:13:54.491 10:27:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:54.491 10:27:48 -- bdev/blockdev.sh@450 -- # waitforbdev Malloc_0 00:13:54.491 10:27:48 -- common/autotest_common.sh@887 -- # local bdev_name=Malloc_0 00:13:54.491 10:27:48 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:54.491 10:27:48 -- common/autotest_common.sh@889 -- # local i 00:13:54.491 10:27:48 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:54.491 10:27:48 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:54.491 10:27:48 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:13:54.491 10:27:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:54.491 10:27:48 -- common/autotest_common.sh@10 -- # set +x 00:13:54.491 10:27:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:54.491 10:27:48 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000 00:13:54.491 10:27:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:54.491 10:27:48 -- common/autotest_common.sh@10 -- # set +x 00:13:54.749 [ 00:13:54.749 { 00:13:54.749 "name": "Malloc_0", 00:13:54.749 "aliases": [ 00:13:54.749 "17a06c26-53e4-4e22-81f1-c6abdc811a96" 00:13:54.749 ], 00:13:54.749 "product_name": "Malloc disk", 00:13:54.749 "block_size": 512, 00:13:54.749 "num_blocks": 262144, 00:13:54.749 "uuid": "17a06c26-53e4-4e22-81f1-c6abdc811a96", 00:13:54.749 "assigned_rate_limits": { 00:13:54.749 "rw_ios_per_sec": 0, 00:13:54.749 "rw_mbytes_per_sec": 0, 00:13:54.749 "r_mbytes_per_sec": 0, 00:13:54.749 "w_mbytes_per_sec": 0 00:13:54.749 }, 00:13:54.749 "claimed": false, 00:13:54.749 "zoned": false, 00:13:54.749 "supported_io_types": { 00:13:54.749 "read": true, 00:13:54.749 "write": true, 00:13:54.749 "unmap": true, 00:13:54.749 "write_zeroes": true, 00:13:54.749 "flush": true, 00:13:54.749 "reset": true, 00:13:54.749 "compare": false, 00:13:54.749 "compare_and_write": false, 00:13:54.749 "abort": true, 00:13:54.749 "nvme_admin": false, 00:13:54.749 "nvme_io": false 00:13:54.749 }, 00:13:54.749 "memory_domains": [ 00:13:54.749 { 00:13:54.749 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:54.749 "dma_device_type": 2 00:13:54.749 } 00:13:54.749 ], 00:13:54.749 "driver_specific": {} 00:13:54.749 } 00:13:54.749 ] 00:13:54.749 10:27:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:54.749 10:27:48 -- common/autotest_common.sh@895 -- # return 0 00:13:54.749 10:27:48 -- bdev/blockdev.sh@451 -- # rpc_cmd bdev_null_create Null_1 128 512 00:13:54.749 10:27:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:54.749 10:27:48 -- common/autotest_common.sh@10 -- # 
set +x 00:13:54.749 Null_1 00:13:54.749 10:27:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:54.749 10:27:48 -- bdev/blockdev.sh@452 -- # waitforbdev Null_1 00:13:54.749 10:27:48 -- common/autotest_common.sh@887 -- # local bdev_name=Null_1 00:13:54.749 10:27:48 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:54.749 10:27:48 -- common/autotest_common.sh@889 -- # local i 00:13:54.749 10:27:48 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:54.749 10:27:48 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:54.749 10:27:48 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:13:54.749 10:27:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:54.749 10:27:48 -- common/autotest_common.sh@10 -- # set +x 00:13:54.749 10:27:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:54.749 10:27:48 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000 00:13:54.749 10:27:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:54.749 10:27:48 -- common/autotest_common.sh@10 -- # set +x 00:13:54.749 [ 00:13:54.749 { 00:13:54.749 "name": "Null_1", 00:13:54.749 "aliases": [ 00:13:54.749 "306b32b9-dd84-47c3-b63d-29486cd4cdfa" 00:13:54.749 ], 00:13:54.749 "product_name": "Null disk", 00:13:54.749 "block_size": 512, 00:13:54.749 "num_blocks": 262144, 00:13:54.749 "uuid": "306b32b9-dd84-47c3-b63d-29486cd4cdfa", 00:13:54.749 "assigned_rate_limits": { 00:13:54.749 "rw_ios_per_sec": 0, 00:13:54.749 "rw_mbytes_per_sec": 0, 00:13:54.749 "r_mbytes_per_sec": 0, 00:13:54.749 "w_mbytes_per_sec": 0 00:13:54.749 }, 00:13:54.749 "claimed": false, 00:13:54.749 "zoned": false, 00:13:54.749 "supported_io_types": { 00:13:54.749 "read": true, 00:13:54.749 "write": true, 00:13:54.749 "unmap": false, 00:13:54.749 "write_zeroes": true, 00:13:54.749 "flush": false, 00:13:54.749 "reset": true, 00:13:54.749 "compare": false, 00:13:54.749 "compare_and_write": false, 00:13:54.749 "abort": true, 00:13:54.749 "nvme_admin": false, 00:13:54.749 "nvme_io": false 00:13:54.749 }, 00:13:54.749 "driver_specific": {} 00:13:54.749 } 00:13:54.749 ] 00:13:54.749 10:27:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:54.749 10:27:48 -- common/autotest_common.sh@895 -- # return 0 00:13:54.749 10:27:48 -- bdev/blockdev.sh@455 -- # qos_function_test 00:13:54.749 10:27:48 -- bdev/blockdev.sh@408 -- # local qos_lower_iops_limit=1000 00:13:54.749 10:27:48 -- bdev/blockdev.sh@409 -- # local qos_lower_bw_limit=2 00:13:54.749 10:27:48 -- bdev/blockdev.sh@410 -- # local io_result=0 00:13:54.749 10:27:48 -- bdev/blockdev.sh@411 -- # local iops_limit=0 00:13:54.749 10:27:48 -- bdev/blockdev.sh@412 -- # local bw_limit=0 00:13:54.749 10:27:48 -- bdev/blockdev.sh@454 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:54.749 10:27:48 -- bdev/blockdev.sh@414 -- # get_io_result IOPS Malloc_0 00:13:54.749 10:27:48 -- bdev/blockdev.sh@373 -- # local limit_type=IOPS 00:13:54.749 10:27:48 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:13:54.749 10:27:48 -- bdev/blockdev.sh@375 -- # local iostat_result 00:13:54.749 10:27:48 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:13:54.749 10:27:48 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:13:54.749 10:27:48 -- bdev/blockdev.sh@376 -- # tail -1 00:13:54.749 Running I/O for 60 seconds... 
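While the 60-second randread job above runs, qos_function_test first samples the unthrottled rate of Malloc_0 with iostat.py and derives the IOPS cap it enforces next (84750 measured, 21000 applied below). A hedged sketch of that derivation; the quarter-then-floor-to-thousands rounding rule is inferred from the observed 84750 -> 21000 and is an assumption, not read out of blockdev.sh:

  SPDK=/home/vagrant/spdk_repo/spdk
  # sample the device for 5 one-second intervals, keep the last IOPS column
  iops=$("$SPDK"/scripts/iostat.py -d -i 1 -t 5 | grep Malloc_0 | tail -1 | awk '{print $2}')
  limit=$(( ${iops%.*} / 4 / 1000 * 1000 ))   # assumed rounding: 84750 -> 21000
  "$SPDK"/scripts/rpc.py bdev_set_qos_limit --rw_ios_per_sec "$limit" Malloc_0

run_qos_test then accepts any re-measured rate within roughly ±10% of the cap, which is where the 18900/23100 bounds below come from (the measured 21012 passes).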
00:14:00.017 10:27:53 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 84750.28 339001.14 0.00 0.00 342016.00 0.00 0.00 ' 00:14:00.017 10:27:53 -- bdev/blockdev.sh@377 -- # '[' IOPS = IOPS ']' 00:14:00.017 10:27:53 -- bdev/blockdev.sh@378 -- # awk '{print $2}' 00:14:00.017 10:27:53 -- bdev/blockdev.sh@378 -- # iostat_result=84750.28 00:14:00.017 10:27:53 -- bdev/blockdev.sh@383 -- # echo 84750 00:14:00.017 10:27:53 -- bdev/blockdev.sh@414 -- # io_result=84750 00:14:00.017 10:27:53 -- bdev/blockdev.sh@416 -- # iops_limit=21000 00:14:00.017 10:27:53 -- bdev/blockdev.sh@417 -- # '[' 21000 -gt 1000 ']' 00:14:00.017 10:27:53 -- bdev/blockdev.sh@420 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 21000 Malloc_0 00:14:00.017 10:27:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:00.017 10:27:53 -- common/autotest_common.sh@10 -- # set +x 00:14:00.017 10:27:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:00.017 10:27:53 -- bdev/blockdev.sh@421 -- # run_test bdev_qos_iops run_qos_test 21000 IOPS Malloc_0 00:14:00.017 10:27:53 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:14:00.017 10:27:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:00.017 10:27:53 -- common/autotest_common.sh@10 -- # set +x 00:14:00.017 ************************************ 00:14:00.017 START TEST bdev_qos_iops 00:14:00.017 ************************************ 00:14:00.017 10:27:53 -- common/autotest_common.sh@1104 -- # run_qos_test 21000 IOPS Malloc_0 00:14:00.017 10:27:53 -- bdev/blockdev.sh@387 -- # local qos_limit=21000 00:14:00.017 10:27:53 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:14:00.017 10:27:53 -- bdev/blockdev.sh@390 -- # get_io_result IOPS Malloc_0 00:14:00.017 10:27:53 -- bdev/blockdev.sh@373 -- # local limit_type=IOPS 00:14:00.017 10:27:53 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:14:00.017 10:27:53 -- bdev/blockdev.sh@375 -- # local iostat_result 00:14:00.017 10:27:53 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:14:00.017 10:27:53 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:14:00.017 10:27:53 -- bdev/blockdev.sh@376 -- # tail -1 00:14:05.282 10:27:58 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 21012.87 84051.47 0.00 0.00 85344.00 0.00 0.00 ' 00:14:05.282 10:27:58 -- bdev/blockdev.sh@377 -- # '[' IOPS = IOPS ']' 00:14:05.282 10:27:58 -- bdev/blockdev.sh@378 -- # awk '{print $2}' 00:14:05.282 10:27:58 -- bdev/blockdev.sh@378 -- # iostat_result=21012.87 00:14:05.282 10:27:58 -- bdev/blockdev.sh@383 -- # echo 21012 00:14:05.282 10:27:58 -- bdev/blockdev.sh@390 -- # qos_result=21012 00:14:05.282 10:27:58 -- bdev/blockdev.sh@391 -- # '[' IOPS = BANDWIDTH ']' 00:14:05.282 10:27:58 -- bdev/blockdev.sh@394 -- # lower_limit=18900 00:14:05.282 10:27:58 -- bdev/blockdev.sh@395 -- # upper_limit=23100 00:14:05.282 ************************************ 00:14:05.282 END TEST bdev_qos_iops 00:14:05.282 ************************************ 00:14:05.282 10:27:58 -- bdev/blockdev.sh@398 -- # '[' 21012 -lt 18900 ']' 00:14:05.282 10:27:58 -- bdev/blockdev.sh@398 -- # '[' 21012 -gt 23100 ']' 00:14:05.282 00:14:05.282 real 0m5.190s 00:14:05.282 user 0m0.108s 00:14:05.282 sys 0m0.029s 00:14:05.282 10:27:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:05.282 10:27:58 -- common/autotest_common.sh@10 -- # set +x 00:14:05.282 10:27:58 -- bdev/blockdev.sh@425 -- # get_io_result BANDWIDTH Null_1 00:14:05.282 10:27:58 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:14:05.282 10:27:58 -- 
bdev/blockdev.sh@374 -- # local qos_dev=Null_1 00:14:05.282 10:27:58 -- bdev/blockdev.sh@375 -- # local iostat_result 00:14:05.282 10:27:58 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:14:05.282 10:27:58 -- bdev/blockdev.sh@376 -- # grep Null_1 00:14:05.282 10:27:58 -- bdev/blockdev.sh@376 -- # tail -1 00:14:10.549 10:28:04 -- bdev/blockdev.sh@376 -- # iostat_result='Null_1 30138.23 120552.91 0.00 0.00 121856.00 0.00 0.00 ' 00:14:10.549 10:28:04 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:14:10.549 10:28:04 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:14:10.549 10:28:04 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:14:10.549 10:28:04 -- bdev/blockdev.sh@380 -- # iostat_result=121856.00 00:14:10.549 10:28:04 -- bdev/blockdev.sh@383 -- # echo 121856 00:14:10.549 10:28:04 -- bdev/blockdev.sh@425 -- # bw_limit=121856 00:14:10.549 10:28:04 -- bdev/blockdev.sh@426 -- # bw_limit=11 00:14:10.549 10:28:04 -- bdev/blockdev.sh@427 -- # '[' 11 -lt 2 ']' 00:14:10.549 10:28:04 -- bdev/blockdev.sh@430 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 11 Null_1 00:14:10.549 10:28:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:10.549 10:28:04 -- common/autotest_common.sh@10 -- # set +x 00:14:10.549 10:28:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:10.549 10:28:04 -- bdev/blockdev.sh@431 -- # run_test bdev_qos_bw run_qos_test 11 BANDWIDTH Null_1 00:14:10.549 10:28:04 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:14:10.549 10:28:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:10.549 10:28:04 -- common/autotest_common.sh@10 -- # set +x 00:14:10.549 ************************************ 00:14:10.549 START TEST bdev_qos_bw 00:14:10.549 ************************************ 00:14:10.549 10:28:04 -- common/autotest_common.sh@1104 -- # run_qos_test 11 BANDWIDTH Null_1 00:14:10.549 10:28:04 -- bdev/blockdev.sh@387 -- # local qos_limit=11 00:14:10.549 10:28:04 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:14:10.549 10:28:04 -- bdev/blockdev.sh@390 -- # get_io_result BANDWIDTH Null_1 00:14:10.549 10:28:04 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:14:10.549 10:28:04 -- bdev/blockdev.sh@374 -- # local qos_dev=Null_1 00:14:10.549 10:28:04 -- bdev/blockdev.sh@375 -- # local iostat_result 00:14:10.549 10:28:04 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:14:10.549 10:28:04 -- bdev/blockdev.sh@376 -- # grep Null_1 00:14:10.549 10:28:04 -- bdev/blockdev.sh@376 -- # tail -1 00:14:15.812 10:28:09 -- bdev/blockdev.sh@376 -- # iostat_result='Null_1 2813.25 11253.02 0.00 0.00 11500.00 0.00 0.00 ' 00:14:15.812 10:28:09 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:14:15.812 10:28:09 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:14:15.812 10:28:09 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:14:15.812 10:28:09 -- bdev/blockdev.sh@380 -- # iostat_result=11500.00 00:14:15.812 10:28:09 -- bdev/blockdev.sh@383 -- # echo 11500 00:14:15.812 10:28:09 -- bdev/blockdev.sh@390 -- # qos_result=11500 00:14:15.812 10:28:09 -- bdev/blockdev.sh@391 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:14:15.812 10:28:09 -- bdev/blockdev.sh@392 -- # qos_limit=11264 00:14:15.812 10:28:09 -- bdev/blockdev.sh@394 -- # lower_limit=10137 00:14:15.812 10:28:09 -- bdev/blockdev.sh@395 -- # upper_limit=12390 00:14:15.812 10:28:09 -- bdev/blockdev.sh@398 -- # '[' 11500 -lt 10137 ']' 00:14:15.812 10:28:09 -- bdev/blockdev.sh@398 -- # '[' 
11500 -gt 12390 ']' 00:14:15.812 00:14:15.812 real 0m5.209s 00:14:15.812 user 0m0.101s 00:14:15.812 sys 0m0.033s 00:14:15.812 10:28:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:15.812 10:28:09 -- common/autotest_common.sh@10 -- # set +x 00:14:15.812 ************************************ 00:14:15.812 END TEST bdev_qos_bw 00:14:15.812 ************************************ 00:14:15.812 10:28:09 -- bdev/blockdev.sh@434 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0 00:14:15.812 10:28:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:15.812 10:28:09 -- common/autotest_common.sh@10 -- # set +x 00:14:15.812 10:28:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:15.812 10:28:09 -- bdev/blockdev.sh@435 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0 00:14:15.812 10:28:09 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:14:15.812 10:28:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:15.812 10:28:09 -- common/autotest_common.sh@10 -- # set +x 00:14:15.812 ************************************ 00:14:15.812 START TEST bdev_qos_ro_bw 00:14:15.812 ************************************ 00:14:15.812 10:28:09 -- common/autotest_common.sh@1104 -- # run_qos_test 2 BANDWIDTH Malloc_0 00:14:15.812 10:28:09 -- bdev/blockdev.sh@387 -- # local qos_limit=2 00:14:15.812 10:28:09 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:14:15.812 10:28:09 -- bdev/blockdev.sh@390 -- # get_io_result BANDWIDTH Malloc_0 00:14:15.812 10:28:09 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:14:15.812 10:28:09 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:14:15.812 10:28:09 -- bdev/blockdev.sh@375 -- # local iostat_result 00:14:15.812 10:28:09 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:14:15.812 10:28:09 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:14:15.812 10:28:09 -- bdev/blockdev.sh@376 -- # tail -1 00:14:21.097 10:28:14 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 511.92 2047.67 0.00 0.00 2068.00 0.00 0.00 ' 00:14:21.097 10:28:14 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:14:21.097 10:28:14 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:14:21.097 10:28:14 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:14:21.097 10:28:14 -- bdev/blockdev.sh@380 -- # iostat_result=2068.00 00:14:21.097 10:28:14 -- bdev/blockdev.sh@383 -- # echo 2068 00:14:21.097 10:28:14 -- bdev/blockdev.sh@390 -- # qos_result=2068 00:14:21.097 10:28:14 -- bdev/blockdev.sh@391 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:14:21.097 10:28:14 -- bdev/blockdev.sh@392 -- # qos_limit=2048 00:14:21.097 10:28:14 -- bdev/blockdev.sh@394 -- # lower_limit=1843 00:14:21.097 10:28:14 -- bdev/blockdev.sh@395 -- # upper_limit=2252 00:14:21.097 10:28:14 -- bdev/blockdev.sh@398 -- # '[' 2068 -lt 1843 ']' 00:14:21.097 10:28:14 -- bdev/blockdev.sh@398 -- # '[' 2068 -gt 2252 ']' 00:14:21.097 00:14:21.097 real 0m5.165s 00:14:21.097 user 0m0.103s 00:14:21.097 sys 0m0.032s 00:14:21.097 10:28:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:21.097 10:28:14 -- common/autotest_common.sh@10 -- # set +x 00:14:21.097 ************************************ 00:14:21.097 END TEST bdev_qos_ro_bw 00:14:21.097 ************************************ 00:14:21.097 10:28:14 -- bdev/blockdev.sh@457 -- # rpc_cmd bdev_malloc_delete Malloc_0 00:14:21.097 10:28:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:21.097 10:28:14 -- common/autotest_common.sh@10 -- # set +x 00:14:21.406 10:28:15 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:21.406 10:28:15 -- bdev/blockdev.sh@458 -- # rpc_cmd bdev_null_delete Null_1 00:14:21.406 10:28:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:21.406 10:28:15 -- common/autotest_common.sh@10 -- # set +x 00:14:21.406 00:14:21.406 Latency(us) 00:14:21.406 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:21.406 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:14:21.407 Malloc_0 : 26.59 28767.07 112.37 0.00 0.00 8816.28 1630.95 503316.48 00:14:21.407 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:14:21.407 Null_1 : 26.78 28906.36 112.92 0.00 0.00 8839.38 573.44 182070.92 00:14:21.407 =================================================================================================================== 00:14:21.407 Total : 57673.42 225.29 0.00 0.00 8827.90 573.44 503316.48 00:14:21.407 0 00:14:21.407 10:28:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:21.407 10:28:15 -- bdev/blockdev.sh@459 -- # killprocess 113570 00:14:21.407 10:28:15 -- common/autotest_common.sh@926 -- # '[' -z 113570 ']' 00:14:21.407 10:28:15 -- common/autotest_common.sh@930 -- # kill -0 113570 00:14:21.407 10:28:15 -- common/autotest_common.sh@931 -- # uname 00:14:21.407 10:28:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:21.407 10:28:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 113570 00:14:21.678 10:28:15 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:21.678 killing process with pid 113570 00:14:21.678 Received shutdown signal, test time was about 26.809544 seconds 00:14:21.678 00:14:21.678 Latency(us) 00:14:21.678 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:21.678 =================================================================================================================== 00:14:21.678 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:21.678 10:28:15 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:21.678 10:28:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 113570' 00:14:21.678 10:28:15 -- common/autotest_common.sh@945 -- # kill 113570 00:14:21.678 10:28:15 -- common/autotest_common.sh@950 -- # wait 113570 00:14:22.614 10:28:16 -- bdev/blockdev.sh@460 -- # trap - SIGINT SIGTERM EXIT 00:14:22.615 00:14:22.615 real 0m29.123s 00:14:22.615 user 0m29.751s 00:14:22.615 sys 0m0.635s 00:14:22.615 ************************************ 00:14:22.615 END TEST bdev_qos 00:14:22.615 ************************************ 00:14:22.615 10:28:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:22.615 10:28:16 -- common/autotest_common.sh@10 -- # set +x 00:14:22.615 10:28:16 -- bdev/blockdev.sh@787 -- # run_test bdev_qd_sampling qd_sampling_test_suite '' 00:14:22.615 10:28:16 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:22.615 10:28:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:22.615 10:28:16 -- common/autotest_common.sh@10 -- # set +x 00:14:22.615 ************************************ 00:14:22.615 START TEST bdev_qd_sampling 00:14:22.615 ************************************ 00:14:22.615 Process bdev QD sampling period testing pid: 114086 00:14:22.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
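The bdev_qd_sampling suite starting here registers Malloc_QD, sets a queue-depth sampling period of 10 on it, runs randread at queue depth 256, and then reads the period back out of bdev_get_iostat. A hedged reconstruction of that round-trip, using only the rpc verbs and the jq filter visible in the trace below:

  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK"/scripts/rpc.py bdev_set_qd_sampling_period Malloc_QD 10
  period=$("$SPDK"/scripts/rpc.py bdev_get_iostat -b Malloc_QD \
           | jq -r '.bdevs[0].queue_depth_polling_period')
  # the suite fails if the period did not stick
  [ "$period" = 10 ] || { echo "sampling period not applied: $period" >&2; exit 1; }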
00:14:22.615 10:28:16 -- common/autotest_common.sh@1104 -- # qd_sampling_test_suite '' 00:14:22.615 10:28:16 -- bdev/blockdev.sh@536 -- # QD_DEV=Malloc_QD 00:14:22.615 10:28:16 -- bdev/blockdev.sh@539 -- # QD_PID=114086 00:14:22.615 10:28:16 -- bdev/blockdev.sh@540 -- # echo 'Process bdev QD sampling period testing pid: 114086' 00:14:22.615 10:28:16 -- bdev/blockdev.sh@541 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT 00:14:22.615 10:28:16 -- bdev/blockdev.sh@542 -- # waitforlisten 114086 00:14:22.615 10:28:16 -- common/autotest_common.sh@819 -- # '[' -z 114086 ']' 00:14:22.615 10:28:16 -- bdev/blockdev.sh@538 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' 00:14:22.615 10:28:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:22.615 10:28:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:22.615 10:28:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:22.615 10:28:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:22.615 10:28:16 -- common/autotest_common.sh@10 -- # set +x 00:14:22.615 [2024-07-12 10:28:16.530960] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:22.615 [2024-07-12 10:28:16.531542] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114086 ] 00:14:22.874 [2024-07-12 10:28:16.706561] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:23.133 [2024-07-12 10:28:16.954566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:23.133 [2024-07-12 10:28:16.954586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:23.701 10:28:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:23.701 10:28:17 -- common/autotest_common.sh@852 -- # return 0 00:14:23.701 10:28:17 -- bdev/blockdev.sh@544 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:14:23.701 10:28:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:23.701 10:28:17 -- common/autotest_common.sh@10 -- # set +x 00:14:23.701 Malloc_QD 00:14:23.701 10:28:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:23.701 10:28:17 -- bdev/blockdev.sh@545 -- # waitforbdev Malloc_QD 00:14:23.701 10:28:17 -- common/autotest_common.sh@887 -- # local bdev_name=Malloc_QD 00:14:23.701 10:28:17 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:23.701 10:28:17 -- common/autotest_common.sh@889 -- # local i 00:14:23.701 10:28:17 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:23.701 10:28:17 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:23.701 10:28:17 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:14:23.701 10:28:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:23.701 10:28:17 -- common/autotest_common.sh@10 -- # set +x 00:14:23.701 10:28:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:23.701 10:28:17 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000 00:14:23.701 10:28:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:23.701 10:28:17 -- common/autotest_common.sh@10 -- # set +x 00:14:23.701 [ 00:14:23.701 { 00:14:23.701 "name": "Malloc_QD", 00:14:23.701 "aliases": [ 00:14:23.701 
"96981424-9fbd-4aa3-ab03-96202b9ff523" 00:14:23.701 ], 00:14:23.701 "product_name": "Malloc disk", 00:14:23.701 "block_size": 512, 00:14:23.701 "num_blocks": 262144, 00:14:23.701 "uuid": "96981424-9fbd-4aa3-ab03-96202b9ff523", 00:14:23.701 "assigned_rate_limits": { 00:14:23.701 "rw_ios_per_sec": 0, 00:14:23.701 "rw_mbytes_per_sec": 0, 00:14:23.701 "r_mbytes_per_sec": 0, 00:14:23.701 "w_mbytes_per_sec": 0 00:14:23.701 }, 00:14:23.701 "claimed": false, 00:14:23.701 "zoned": false, 00:14:23.701 "supported_io_types": { 00:14:23.701 "read": true, 00:14:23.701 "write": true, 00:14:23.701 "unmap": true, 00:14:23.701 "write_zeroes": true, 00:14:23.701 "flush": true, 00:14:23.701 "reset": true, 00:14:23.701 "compare": false, 00:14:23.701 "compare_and_write": false, 00:14:23.701 "abort": true, 00:14:23.701 "nvme_admin": false, 00:14:23.701 "nvme_io": false 00:14:23.701 }, 00:14:23.701 "memory_domains": [ 00:14:23.701 { 00:14:23.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.701 "dma_device_type": 2 00:14:23.701 } 00:14:23.701 ], 00:14:23.701 "driver_specific": {} 00:14:23.701 } 00:14:23.701 ] 00:14:23.701 10:28:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:23.701 10:28:17 -- common/autotest_common.sh@895 -- # return 0 00:14:23.701 10:28:17 -- bdev/blockdev.sh@548 -- # sleep 2 00:14:23.701 10:28:17 -- bdev/blockdev.sh@547 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:23.960 Running I/O for 5 seconds... 00:14:25.865 10:28:19 -- bdev/blockdev.sh@549 -- # qd_sampling_function_test Malloc_QD 00:14:25.865 10:28:19 -- bdev/blockdev.sh@517 -- # local bdev_name=Malloc_QD 00:14:25.865 10:28:19 -- bdev/blockdev.sh@518 -- # local sampling_period=10 00:14:25.865 10:28:19 -- bdev/blockdev.sh@519 -- # local iostats 00:14:25.865 10:28:19 -- bdev/blockdev.sh@521 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:14:25.865 10:28:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:25.865 10:28:19 -- common/autotest_common.sh@10 -- # set +x 00:14:25.865 10:28:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:25.865 10:28:19 -- bdev/blockdev.sh@523 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:14:25.865 10:28:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:25.865 10:28:19 -- common/autotest_common.sh@10 -- # set +x 00:14:25.865 10:28:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:25.865 10:28:19 -- bdev/blockdev.sh@523 -- # iostats='{ 00:14:25.865 "tick_rate": 2200000000, 00:14:25.865 "ticks": 1730712015167, 00:14:25.865 "bdevs": [ 00:14:25.865 { 00:14:25.865 "name": "Malloc_QD", 00:14:25.865 "bytes_read": 539005440, 00:14:25.865 "num_read_ops": 131587, 00:14:25.865 "bytes_written": 0, 00:14:25.865 "num_write_ops": 0, 00:14:25.865 "bytes_unmapped": 0, 00:14:25.865 "num_unmap_ops": 0, 00:14:25.865 "bytes_copied": 0, 00:14:25.865 "num_copy_ops": 0, 00:14:25.865 "read_latency_ticks": 2182989330063, 00:14:25.865 "max_read_latency_ticks": 20037674, 00:14:25.865 "min_read_latency_ticks": 446824, 00:14:25.865 "write_latency_ticks": 0, 00:14:25.865 "max_write_latency_ticks": 0, 00:14:25.865 "min_write_latency_ticks": 0, 00:14:25.865 "unmap_latency_ticks": 0, 00:14:25.865 "max_unmap_latency_ticks": 0, 00:14:25.865 "min_unmap_latency_ticks": 0, 00:14:25.865 "copy_latency_ticks": 0, 00:14:25.865 "max_copy_latency_ticks": 0, 00:14:25.865 "min_copy_latency_ticks": 0, 00:14:25.865 "io_error": {}, 00:14:25.865 "queue_depth_polling_period": 10, 00:14:25.865 "queue_depth": 512, 00:14:25.865 "io_time": 20, 00:14:25.865 
"weighted_io_time": 10240 00:14:25.865 } 00:14:25.865 ] 00:14:25.865 }' 00:14:25.865 10:28:19 -- bdev/blockdev.sh@525 -- # jq -r '.bdevs[0].queue_depth_polling_period' 00:14:25.865 10:28:19 -- bdev/blockdev.sh@525 -- # qd_sampling_period=10 00:14:25.865 10:28:19 -- bdev/blockdev.sh@527 -- # '[' 10 == null ']' 00:14:25.865 10:28:19 -- bdev/blockdev.sh@527 -- # '[' 10 -ne 10 ']' 00:14:25.865 10:28:19 -- bdev/blockdev.sh@551 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:14:25.865 10:28:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:25.865 10:28:19 -- common/autotest_common.sh@10 -- # set +x 00:14:25.865 00:14:25.865 Latency(us) 00:14:25.865 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:25.865 Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:14:25.865 Malloc_QD : 2.03 33371.13 130.36 0.00 0.00 7647.70 1452.22 9115.46 00:14:25.865 Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:14:25.865 Malloc_QD : 2.03 34366.94 134.25 0.00 0.00 7427.99 670.25 8638.84 00:14:25.865 =================================================================================================================== 00:14:25.865 Total : 67738.07 264.60 0.00 0.00 7536.21 670.25 9115.46 00:14:26.125 0 00:14:26.125 10:28:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:26.125 10:28:19 -- bdev/blockdev.sh@552 -- # killprocess 114086 00:14:26.125 10:28:19 -- common/autotest_common.sh@926 -- # '[' -z 114086 ']' 00:14:26.125 10:28:19 -- common/autotest_common.sh@930 -- # kill -0 114086 00:14:26.125 10:28:19 -- common/autotest_common.sh@931 -- # uname 00:14:26.125 10:28:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:26.125 10:28:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 114086 00:14:26.125 killing process with pid 114086 00:14:26.125 Received shutdown signal, test time was about 2.162860 seconds 00:14:26.125 00:14:26.125 Latency(us) 00:14:26.125 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:26.125 =================================================================================================================== 00:14:26.125 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:26.125 10:28:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:26.125 10:28:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:26.125 10:28:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 114086' 00:14:26.125 10:28:19 -- common/autotest_common.sh@945 -- # kill 114086 00:14:26.125 10:28:19 -- common/autotest_common.sh@950 -- # wait 114086 00:14:27.502 ************************************ 00:14:27.502 END TEST bdev_qd_sampling 00:14:27.502 ************************************ 00:14:27.502 10:28:21 -- bdev/blockdev.sh@553 -- # trap - SIGINT SIGTERM EXIT 00:14:27.502 00:14:27.502 real 0m4.551s 00:14:27.502 user 0m8.335s 00:14:27.502 sys 0m0.421s 00:14:27.502 10:28:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:27.502 10:28:21 -- common/autotest_common.sh@10 -- # set +x 00:14:27.502 10:28:21 -- bdev/blockdev.sh@788 -- # run_test bdev_error error_test_suite '' 00:14:27.502 10:28:21 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:27.502 10:28:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:27.502 10:28:21 -- common/autotest_common.sh@10 -- # set +x 00:14:27.502 ************************************ 00:14:27.502 START TEST bdev_error 00:14:27.502 
************************************ 00:14:27.502 10:28:21 -- common/autotest_common.sh@1104 -- # error_test_suite '' 00:14:27.502 10:28:21 -- bdev/blockdev.sh@464 -- # DEV_1=Dev_1 00:14:27.502 10:28:21 -- bdev/blockdev.sh@465 -- # DEV_2=Dev_2 00:14:27.502 10:28:21 -- bdev/blockdev.sh@466 -- # ERR_DEV=EE_Dev_1 00:14:27.502 10:28:21 -- bdev/blockdev.sh@470 -- # ERR_PID=114198 00:14:27.502 Process error testing pid: 114198 00:14:27.502 10:28:21 -- bdev/blockdev.sh@471 -- # echo 'Process error testing pid: 114198' 00:14:27.502 10:28:21 -- bdev/blockdev.sh@469 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:14:27.502 10:28:21 -- bdev/blockdev.sh@472 -- # waitforlisten 114198 00:14:27.502 10:28:21 -- common/autotest_common.sh@819 -- # '[' -z 114198 ']' 00:14:27.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:27.502 10:28:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:27.502 10:28:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:27.503 10:28:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:27.503 10:28:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:27.503 10:28:21 -- common/autotest_common.sh@10 -- # set +x 00:14:27.503 [2024-07-12 10:28:21.135757] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:27.503 [2024-07-12 10:28:21.135940] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114198 ] 00:14:27.503 [2024-07-12 10:28:21.301323] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.761 [2024-07-12 10:28:21.457089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:28.328 10:28:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:28.328 10:28:21 -- common/autotest_common.sh@852 -- # return 0 00:14:28.328 10:28:21 -- bdev/blockdev.sh@474 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:14:28.328 10:28:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:28.328 10:28:21 -- common/autotest_common.sh@10 -- # set +x 00:14:28.328 Dev_1 00:14:28.328 10:28:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:28.328 10:28:22 -- bdev/blockdev.sh@475 -- # waitforbdev Dev_1 00:14:28.328 10:28:22 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_1 00:14:28.328 10:28:22 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:28.328 10:28:22 -- common/autotest_common.sh@889 -- # local i 00:14:28.328 10:28:22 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:28.328 10:28:22 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:28.328 10:28:22 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:14:28.328 10:28:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:28.328 10:28:22 -- common/autotest_common.sh@10 -- # set +x 00:14:28.328 10:28:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:28.328 10:28:22 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:14:28.328 10:28:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:28.328 10:28:22 -- common/autotest_common.sh@10 -- # set +x 00:14:28.328 [ 00:14:28.328 { 00:14:28.328 "name": "Dev_1", 00:14:28.328 
"aliases": [ 00:14:28.328 "5e1b8a91-6c77-47bc-94ef-249a9887c0a5" 00:14:28.328 ], 00:14:28.328 "product_name": "Malloc disk", 00:14:28.328 "block_size": 512, 00:14:28.328 "num_blocks": 262144, 00:14:28.328 "uuid": "5e1b8a91-6c77-47bc-94ef-249a9887c0a5", 00:14:28.328 "assigned_rate_limits": { 00:14:28.328 "rw_ios_per_sec": 0, 00:14:28.328 "rw_mbytes_per_sec": 0, 00:14:28.328 "r_mbytes_per_sec": 0, 00:14:28.328 "w_mbytes_per_sec": 0 00:14:28.328 }, 00:14:28.328 "claimed": false, 00:14:28.328 "zoned": false, 00:14:28.328 "supported_io_types": { 00:14:28.328 "read": true, 00:14:28.328 "write": true, 00:14:28.328 "unmap": true, 00:14:28.328 "write_zeroes": true, 00:14:28.328 "flush": true, 00:14:28.328 "reset": true, 00:14:28.328 "compare": false, 00:14:28.328 "compare_and_write": false, 00:14:28.328 "abort": true, 00:14:28.328 "nvme_admin": false, 00:14:28.328 "nvme_io": false 00:14:28.328 }, 00:14:28.328 "memory_domains": [ 00:14:28.328 { 00:14:28.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:28.328 "dma_device_type": 2 00:14:28.328 } 00:14:28.328 ], 00:14:28.328 "driver_specific": {} 00:14:28.328 } 00:14:28.328 ] 00:14:28.328 10:28:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:28.328 10:28:22 -- common/autotest_common.sh@895 -- # return 0 00:14:28.328 10:28:22 -- bdev/blockdev.sh@476 -- # rpc_cmd bdev_error_create Dev_1 00:14:28.328 10:28:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:28.328 10:28:22 -- common/autotest_common.sh@10 -- # set +x 00:14:28.328 true 00:14:28.328 10:28:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:28.328 10:28:22 -- bdev/blockdev.sh@477 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:14:28.328 10:28:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:28.328 10:28:22 -- common/autotest_common.sh@10 -- # set +x 00:14:28.586 Dev_2 00:14:28.586 10:28:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:28.586 10:28:22 -- bdev/blockdev.sh@478 -- # waitforbdev Dev_2 00:14:28.586 10:28:22 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_2 00:14:28.586 10:28:22 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:28.586 10:28:22 -- common/autotest_common.sh@889 -- # local i 00:14:28.586 10:28:22 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:28.586 10:28:22 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:28.586 10:28:22 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:14:28.586 10:28:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:28.587 10:28:22 -- common/autotest_common.sh@10 -- # set +x 00:14:28.587 10:28:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:28.587 10:28:22 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:14:28.587 10:28:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:28.587 10:28:22 -- common/autotest_common.sh@10 -- # set +x 00:14:28.587 [ 00:14:28.587 { 00:14:28.587 "name": "Dev_2", 00:14:28.587 "aliases": [ 00:14:28.587 "f3fe2e47-f9f7-4623-ae71-ca350059446b" 00:14:28.587 ], 00:14:28.587 "product_name": "Malloc disk", 00:14:28.587 "block_size": 512, 00:14:28.587 "num_blocks": 262144, 00:14:28.587 "uuid": "f3fe2e47-f9f7-4623-ae71-ca350059446b", 00:14:28.587 "assigned_rate_limits": { 00:14:28.587 "rw_ios_per_sec": 0, 00:14:28.587 "rw_mbytes_per_sec": 0, 00:14:28.587 "r_mbytes_per_sec": 0, 00:14:28.587 "w_mbytes_per_sec": 0 00:14:28.587 }, 00:14:28.587 "claimed": false, 00:14:28.587 "zoned": false, 00:14:28.587 "supported_io_types": { 00:14:28.587 "read": 
true, 00:14:28.587 "write": true, 00:14:28.587 "unmap": true, 00:14:28.587 "write_zeroes": true, 00:14:28.587 "flush": true, 00:14:28.587 "reset": true, 00:14:28.587 "compare": false, 00:14:28.587 "compare_and_write": false, 00:14:28.587 "abort": true, 00:14:28.587 "nvme_admin": false, 00:14:28.587 "nvme_io": false 00:14:28.587 }, 00:14:28.587 "memory_domains": [ 00:14:28.587 { 00:14:28.587 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:28.587 "dma_device_type": 2 00:14:28.587 } 00:14:28.587 ], 00:14:28.587 "driver_specific": {} 00:14:28.587 } 00:14:28.587 ] 00:14:28.587 10:28:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:28.587 10:28:22 -- common/autotest_common.sh@895 -- # return 0 00:14:28.587 10:28:22 -- bdev/blockdev.sh@479 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:14:28.587 10:28:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:28.587 10:28:22 -- common/autotest_common.sh@10 -- # set +x 00:14:28.587 10:28:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:28.587 10:28:22 -- bdev/blockdev.sh@482 -- # sleep 1 00:14:28.587 10:28:22 -- bdev/blockdev.sh@481 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:14:28.587 Running I/O for 5 seconds... 00:14:29.520 10:28:23 -- bdev/blockdev.sh@485 -- # kill -0 114198 00:14:29.520 Process is existed as continue on error is set. Pid: 114198 00:14:29.520 10:28:23 -- bdev/blockdev.sh@486 -- # echo 'Process is existed as continue on error is set. Pid: 114198' 00:14:29.520 10:28:23 -- bdev/blockdev.sh@493 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:14:29.520 10:28:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:29.520 10:28:23 -- common/autotest_common.sh@10 -- # set +x 00:14:29.520 10:28:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:29.520 10:28:23 -- bdev/blockdev.sh@494 -- # rpc_cmd bdev_malloc_delete Dev_1 00:14:29.520 10:28:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:29.520 10:28:23 -- common/autotest_common.sh@10 -- # set +x 00:14:29.520 Timeout while waiting for response: 00:14:29.520 00:14:29.520 00:14:29.778 10:28:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:29.778 10:28:23 -- bdev/blockdev.sh@495 -- # sleep 5 00:14:33.957 00:14:33.957 Latency(us) 00:14:33.957 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:33.957 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:14:33.957 EE_Dev_1 : 0.93 47699.43 186.33 5.36 0.00 333.02 115.90 592.06 00:14:33.957 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:14:33.957 Dev_2 : 5.00 98804.22 385.95 0.00 0.00 159.55 51.67 255471.24 00:14:33.957 =================================================================================================================== 00:14:33.958 Total : 146503.65 572.28 5.36 0.00 173.89 51.67 255471.24 00:14:34.892 10:28:28 -- bdev/blockdev.sh@497 -- # killprocess 114198 00:14:34.892 10:28:28 -- common/autotest_common.sh@926 -- # '[' -z 114198 ']' 00:14:34.892 10:28:28 -- common/autotest_common.sh@930 -- # kill -0 114198 00:14:34.892 10:28:28 -- common/autotest_common.sh@931 -- # uname 00:14:34.892 10:28:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:34.892 10:28:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 114198 00:14:34.892 10:28:28 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:34.892 killing process with pid 114198 00:14:34.892 10:28:28 -- common/autotest_common.sh@936 -- # 
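During the five-second sleep above, bdevperf is driving EE_Dev_1, the error-injecting view of Dev_1 that bdev_error_create exposed; the earlier bdev_error_inject_error call armed it to fail five operations of any type, which is why EE_Dev_1 reports a nonzero Fail/s in the table below while Dev_2 runs clean. The setup, restated as a hedged sketch from the rpc calls traced above:

  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK"/scripts/rpc.py bdev_malloc_create -b Dev_1 128 512    # 128 MiB backing, 512-byte blocks
  "$SPDK"/scripts/rpc.py bdev_error_create Dev_1                # exposes the error bdev EE_Dev_1
  "$SPDK"/scripts/rpc.py bdev_error_inject_error EE_Dev_1 all failure -n 5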
'[' reactor_1 = sudo ']' 00:14:34.892 10:28:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 114198' 00:14:34.892 10:28:28 -- common/autotest_common.sh@945 -- # kill 114198 00:14:34.892 Received shutdown signal, test time was about 5.000000 seconds 00:14:34.892 00:14:34.892 Latency(us) 00:14:34.892 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:34.892 =================================================================================================================== 00:14:34.892 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:34.892 10:28:28 -- common/autotest_common.sh@950 -- # wait 114198 00:14:36.268 10:28:29 -- bdev/blockdev.sh@500 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:14:36.268 10:28:29 -- bdev/blockdev.sh@501 -- # ERR_PID=114306 00:14:36.268 Process error testing pid: 114306 00:14:36.268 10:28:29 -- bdev/blockdev.sh@502 -- # echo 'Process error testing pid: 114306' 00:14:36.268 10:28:29 -- bdev/blockdev.sh@503 -- # waitforlisten 114306 00:14:36.268 10:28:29 -- common/autotest_common.sh@819 -- # '[' -z 114306 ']' 00:14:36.268 10:28:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:36.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:36.268 10:28:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:36.268 10:28:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:36.268 10:28:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:36.268 10:28:29 -- common/autotest_common.sh@10 -- # set +x 00:14:36.268 [2024-07-12 10:28:29.803658] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:14:36.268 [2024-07-12 10:28:29.803814] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114306 ] 00:14:36.268 [2024-07-12 10:28:29.949372] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.268 [2024-07-12 10:28:30.146934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:36.834 10:28:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:36.834 10:28:30 -- common/autotest_common.sh@852 -- # return 0 00:14:36.834 10:28:30 -- bdev/blockdev.sh@505 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:14:36.834 10:28:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:36.834 10:28:30 -- common/autotest_common.sh@10 -- # set +x 00:14:37.092 Dev_1 00:14:37.092 10:28:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:37.092 10:28:30 -- bdev/blockdev.sh@506 -- # waitforbdev Dev_1 00:14:37.092 10:28:30 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_1 00:14:37.092 10:28:30 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:37.092 10:28:30 -- common/autotest_common.sh@889 -- # local i 00:14:37.092 10:28:30 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:37.092 10:28:30 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:37.092 10:28:30 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:14:37.092 10:28:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:37.092 10:28:30 -- common/autotest_common.sh@10 -- # set +x 00:14:37.092 10:28:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:37.092 10:28:30 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:14:37.092 10:28:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:37.092 10:28:30 -- common/autotest_common.sh@10 -- # set +x 00:14:37.092 [ 00:14:37.092 { 00:14:37.092 "name": "Dev_1", 00:14:37.092 "aliases": [ 00:14:37.092 "0af7d2fc-55e5-43b9-b3ce-073066d8e2a6" 00:14:37.092 ], 00:14:37.092 "product_name": "Malloc disk", 00:14:37.092 "block_size": 512, 00:14:37.092 "num_blocks": 262144, 00:14:37.092 "uuid": "0af7d2fc-55e5-43b9-b3ce-073066d8e2a6", 00:14:37.092 "assigned_rate_limits": { 00:14:37.092 "rw_ios_per_sec": 0, 00:14:37.092 "rw_mbytes_per_sec": 0, 00:14:37.092 "r_mbytes_per_sec": 0, 00:14:37.092 "w_mbytes_per_sec": 0 00:14:37.092 }, 00:14:37.092 "claimed": false, 00:14:37.092 "zoned": false, 00:14:37.092 "supported_io_types": { 00:14:37.092 "read": true, 00:14:37.093 "write": true, 00:14:37.093 "unmap": true, 00:14:37.093 "write_zeroes": true, 00:14:37.093 "flush": true, 00:14:37.093 "reset": true, 00:14:37.093 "compare": false, 00:14:37.093 "compare_and_write": false, 00:14:37.093 "abort": true, 00:14:37.093 "nvme_admin": false, 00:14:37.093 "nvme_io": false 00:14:37.093 }, 00:14:37.093 "memory_domains": [ 00:14:37.093 { 00:14:37.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:37.093 "dma_device_type": 2 00:14:37.093 } 00:14:37.093 ], 00:14:37.093 "driver_specific": {} 00:14:37.093 } 00:14:37.093 ] 00:14:37.093 10:28:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:37.093 10:28:30 -- common/autotest_common.sh@895 -- # return 0 00:14:37.093 10:28:30 -- bdev/blockdev.sh@507 -- # rpc_cmd bdev_error_create Dev_1 00:14:37.093 10:28:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:37.093 10:28:30 -- common/autotest_common.sh@10 -- # set +x 00:14:37.093 true 
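This second error pass (pid 114306) arms the same five-failure injection, but here the harness expects bdevperf itself to abort: the NOT wrapper around wait, traced below, passes only if perform_tests exits nonzero, and the JSON-RPC response confirms the run fails with "Operation not permitted". A minimal sketch of that negative assertion, assuming the bdevperf app is already up and listening as in the log:

  SPDK=/home/vagrant/spdk_repo/spdk
  # the test passes only when perform_tests fails
  if "$SPDK"/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests; then
      echo "expected perform_tests to fail" >&2
      exit 1
  fi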
00:14:37.093 10:28:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:37.093 10:28:30 -- bdev/blockdev.sh@508 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:14:37.093 10:28:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:37.093 10:28:30 -- common/autotest_common.sh@10 -- # set +x 00:14:37.351 Dev_2 00:14:37.351 10:28:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:37.351 10:28:31 -- bdev/blockdev.sh@509 -- # waitforbdev Dev_2 00:14:37.351 10:28:31 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_2 00:14:37.351 10:28:31 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:37.351 10:28:31 -- common/autotest_common.sh@889 -- # local i 00:14:37.351 10:28:31 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:37.351 10:28:31 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:37.351 10:28:31 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:14:37.351 10:28:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:37.351 10:28:31 -- common/autotest_common.sh@10 -- # set +x 00:14:37.351 10:28:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:37.351 10:28:31 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:14:37.352 10:28:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:37.352 10:28:31 -- common/autotest_common.sh@10 -- # set +x 00:14:37.352 [ 00:14:37.352 { 00:14:37.352 "name": "Dev_2", 00:14:37.352 "aliases": [ 00:14:37.352 "a9bc440b-4c8f-4f04-b290-2d8ed54284df" 00:14:37.352 ], 00:14:37.352 "product_name": "Malloc disk", 00:14:37.352 "block_size": 512, 00:14:37.352 "num_blocks": 262144, 00:14:37.352 "uuid": "a9bc440b-4c8f-4f04-b290-2d8ed54284df", 00:14:37.352 "assigned_rate_limits": { 00:14:37.352 "rw_ios_per_sec": 0, 00:14:37.352 "rw_mbytes_per_sec": 0, 00:14:37.352 "r_mbytes_per_sec": 0, 00:14:37.352 "w_mbytes_per_sec": 0 00:14:37.352 }, 00:14:37.352 "claimed": false, 00:14:37.352 "zoned": false, 00:14:37.352 "supported_io_types": { 00:14:37.352 "read": true, 00:14:37.352 "write": true, 00:14:37.352 "unmap": true, 00:14:37.352 "write_zeroes": true, 00:14:37.352 "flush": true, 00:14:37.352 "reset": true, 00:14:37.352 "compare": false, 00:14:37.352 "compare_and_write": false, 00:14:37.352 "abort": true, 00:14:37.352 "nvme_admin": false, 00:14:37.352 "nvme_io": false 00:14:37.352 }, 00:14:37.352 "memory_domains": [ 00:14:37.352 { 00:14:37.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:37.352 "dma_device_type": 2 00:14:37.352 } 00:14:37.352 ], 00:14:37.352 "driver_specific": {} 00:14:37.352 } 00:14:37.352 ] 00:14:37.352 10:28:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:37.352 10:28:31 -- common/autotest_common.sh@895 -- # return 0 00:14:37.352 10:28:31 -- bdev/blockdev.sh@510 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:14:37.352 10:28:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:37.352 10:28:31 -- common/autotest_common.sh@10 -- # set +x 00:14:37.352 10:28:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:37.352 10:28:31 -- bdev/blockdev.sh@513 -- # NOT wait 114306 00:14:37.352 10:28:31 -- common/autotest_common.sh@640 -- # local es=0 00:14:37.352 10:28:31 -- bdev/blockdev.sh@512 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:14:37.352 10:28:31 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 114306 00:14:37.352 10:28:31 -- common/autotest_common.sh@628 -- # local arg=wait 00:14:37.352 10:28:31 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:37.352 10:28:31 -- common/autotest_common.sh@632 -- # type -t wait 00:14:37.352 10:28:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:37.352 10:28:31 -- common/autotest_common.sh@643 -- # wait 114306 00:14:37.352 Running I/O for 5 seconds... 00:14:37.352 task offset: 223352 on job bdev=EE_Dev_1 fails 00:14:37.352 00:14:37.352 Latency(us) 00:14:37.352 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:37.352 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:14:37.352 Job: EE_Dev_1 ended in about 0.00 seconds with error 00:14:37.352 EE_Dev_1 : 0.00 34003.09 132.82 7727.98 0.00 316.60 116.83 565.99 00:14:37.352 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:14:37.352 Dev_2 : 0.00 23138.11 90.38 0.00 0.00 490.84 111.24 904.84 00:14:37.352 =================================================================================================================== 00:14:37.352 Total : 57141.20 223.21 7727.98 0.00 411.10 111.24 904.84 00:14:37.352 [2024-07-12 10:28:31.153879] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:37.352 request: 00:14:37.352 { 00:14:37.352 "method": "perform_tests", 00:14:37.352 "req_id": 1 00:14:37.352 } 00:14:37.352 Got JSON-RPC error response 00:14:37.352 response: 00:14:37.352 { 00:14:37.352 "code": -32603, 00:14:37.352 "message": "bdevperf failed with error Operation not permitted" 00:14:37.352 } 00:14:38.727 10:28:32 -- common/autotest_common.sh@643 -- # es=255 00:14:38.727 10:28:32 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:14:38.727 10:28:32 -- common/autotest_common.sh@652 -- # es=127 00:14:38.727 10:28:32 -- common/autotest_common.sh@653 -- # case "$es" in 00:14:38.727 10:28:32 -- common/autotest_common.sh@660 -- # es=1 00:14:38.727 10:28:32 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:14:38.727 00:14:38.727 real 0m11.514s 00:14:38.727 user 0m11.622s 00:14:38.727 sys 0m0.786s 00:14:38.727 ************************************ 00:14:38.727 END TEST bdev_error 00:14:38.727 ************************************ 00:14:38.727 10:28:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:38.728 10:28:32 -- common/autotest_common.sh@10 -- # set +x 00:14:38.728 10:28:32 -- bdev/blockdev.sh@789 -- # run_test bdev_stat stat_test_suite '' 00:14:38.728 10:28:32 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:38.728 10:28:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:38.728 10:28:32 -- common/autotest_common.sh@10 -- # set +x 00:14:38.728 ************************************ 00:14:38.728 START TEST bdev_stat 00:14:38.728 ************************************ 00:14:38.728 10:28:32 -- common/autotest_common.sh@1104 -- # stat_test_suite '' 00:14:38.728 10:28:32 -- bdev/blockdev.sh@590 -- # STAT_DEV=Malloc_STAT 00:14:38.728 10:28:32 -- bdev/blockdev.sh@594 -- # STAT_PID=114387 00:14:38.728 Process Bdev IO statistics testing pid: 114387 00:14:38.728 10:28:32 -- bdev/blockdev.sh@595 -- # echo 'Process Bdev IO statistics testing pid: 114387' 00:14:38.728 10:28:32 -- bdev/blockdev.sh@596 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:14:38.728 10:28:32 -- bdev/blockdev.sh@597 -- # waitforlisten 114387 00:14:38.728 10:28:32 -- bdev/blockdev.sh@593 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:14:38.728 10:28:32 -- common/autotest_common.sh@819 -- 
# '[' -z 114387 ']' 00:14:38.728 10:28:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:38.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:38.728 10:28:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:38.728 10:28:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:38.728 10:28:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:38.728 10:28:32 -- common/autotest_common.sh@10 -- # set +x 00:14:38.986 [2024-07-12 10:28:32.710087] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:38.986 [2024-07-12 10:28:32.710314] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114387 ] 00:14:38.986 [2024-07-12 10:28:32.887719] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:39.278 [2024-07-12 10:28:33.115693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:39.278 [2024-07-12 10:28:33.115721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:39.844 10:28:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:39.844 10:28:33 -- common/autotest_common.sh@852 -- # return 0 00:14:39.844 10:28:33 -- bdev/blockdev.sh@599 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:14:39.844 10:28:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:39.844 10:28:33 -- common/autotest_common.sh@10 -- # set +x 00:14:39.844 Malloc_STAT 00:14:39.844 10:28:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:39.844 10:28:33 -- bdev/blockdev.sh@600 -- # waitforbdev Malloc_STAT 00:14:39.844 10:28:33 -- common/autotest_common.sh@887 -- # local bdev_name=Malloc_STAT 00:14:39.844 10:28:33 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:39.844 10:28:33 -- common/autotest_common.sh@889 -- # local i 00:14:39.844 10:28:33 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:39.844 10:28:33 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:39.844 10:28:33 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:14:39.844 10:28:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:39.844 10:28:33 -- common/autotest_common.sh@10 -- # set +x 00:14:39.844 10:28:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:39.844 10:28:33 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:14:39.844 10:28:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:39.844 10:28:33 -- common/autotest_common.sh@10 -- # set +x 00:14:39.844 [ 00:14:39.844 { 00:14:39.844 "name": "Malloc_STAT", 00:14:39.844 "aliases": [ 00:14:39.844 "daed6e9c-03d3-4b0e-9559-5df30219179c" 00:14:39.844 ], 00:14:39.844 "product_name": "Malloc disk", 00:14:39.844 "block_size": 512, 00:14:39.844 "num_blocks": 262144, 00:14:39.844 "uuid": "daed6e9c-03d3-4b0e-9559-5df30219179c", 00:14:39.844 "assigned_rate_limits": { 00:14:39.844 "rw_ios_per_sec": 0, 00:14:39.844 "rw_mbytes_per_sec": 0, 00:14:39.844 "r_mbytes_per_sec": 0, 00:14:39.844 "w_mbytes_per_sec": 0 00:14:39.844 }, 00:14:39.844 "claimed": false, 00:14:39.844 "zoned": false, 00:14:39.844 "supported_io_types": { 00:14:39.844 "read": true, 00:14:39.844 "write": true, 00:14:39.844 "unmap": true, 00:14:39.844 "write_zeroes": true, 
00:14:39.844 "flush": true, 00:14:39.844 "reset": true, 00:14:39.844 "compare": false, 00:14:39.844 "compare_and_write": false, 00:14:39.844 "abort": true, 00:14:39.844 "nvme_admin": false, 00:14:39.844 "nvme_io": false 00:14:39.844 }, 00:14:39.844 "memory_domains": [ 00:14:39.844 { 00:14:39.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:39.844 "dma_device_type": 2 00:14:39.844 } 00:14:39.844 ], 00:14:39.844 "driver_specific": {} 00:14:39.844 } 00:14:39.844 ] 00:14:39.844 10:28:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:39.844 10:28:33 -- common/autotest_common.sh@895 -- # return 0 00:14:40.102 10:28:33 -- bdev/blockdev.sh@603 -- # sleep 2 00:14:40.102 10:28:33 -- bdev/blockdev.sh@602 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:40.102 Running I/O for 10 seconds... 00:14:42.003 10:28:35 -- bdev/blockdev.sh@604 -- # stat_function_test Malloc_STAT 00:14:42.003 10:28:35 -- bdev/blockdev.sh@557 -- # local bdev_name=Malloc_STAT 00:14:42.003 10:28:35 -- bdev/blockdev.sh@558 -- # local iostats 00:14:42.003 10:28:35 -- bdev/blockdev.sh@559 -- # local io_count1 00:14:42.003 10:28:35 -- bdev/blockdev.sh@560 -- # local io_count2 00:14:42.003 10:28:35 -- bdev/blockdev.sh@561 -- # local iostats_per_channel 00:14:42.003 10:28:35 -- bdev/blockdev.sh@562 -- # local io_count_per_channel1 00:14:42.003 10:28:35 -- bdev/blockdev.sh@563 -- # local io_count_per_channel2 00:14:42.003 10:28:35 -- bdev/blockdev.sh@564 -- # local io_count_per_channel_all=0 00:14:42.003 10:28:35 -- bdev/blockdev.sh@566 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:14:42.003 10:28:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:42.003 10:28:35 -- common/autotest_common.sh@10 -- # set +x 00:14:42.003 10:28:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:42.003 10:28:35 -- bdev/blockdev.sh@566 -- # iostats='{ 00:14:42.003 "tick_rate": 2200000000, 00:14:42.003 "ticks": 1766270256127, 00:14:42.003 "bdevs": [ 00:14:42.003 { 00:14:42.003 "name": "Malloc_STAT", 00:14:42.003 "bytes_read": 536908288, 00:14:42.003 "num_read_ops": 131075, 00:14:42.003 "bytes_written": 0, 00:14:42.003 "num_write_ops": 0, 00:14:42.003 "bytes_unmapped": 0, 00:14:42.003 "num_unmap_ops": 0, 00:14:42.003 "bytes_copied": 0, 00:14:42.003 "num_copy_ops": 0, 00:14:42.003 "read_latency_ticks": 2145381493786, 00:14:42.003 "max_read_latency_ticks": 20802736, 00:14:42.003 "min_read_latency_ticks": 386998, 00:14:42.003 "write_latency_ticks": 0, 00:14:42.003 "max_write_latency_ticks": 0, 00:14:42.003 "min_write_latency_ticks": 0, 00:14:42.003 "unmap_latency_ticks": 0, 00:14:42.003 "max_unmap_latency_ticks": 0, 00:14:42.003 "min_unmap_latency_ticks": 0, 00:14:42.003 "copy_latency_ticks": 0, 00:14:42.003 "max_copy_latency_ticks": 0, 00:14:42.003 "min_copy_latency_ticks": 0, 00:14:42.003 "io_error": {} 00:14:42.003 } 00:14:42.003 ] 00:14:42.003 }' 00:14:42.003 10:28:35 -- bdev/blockdev.sh@567 -- # jq -r '.bdevs[0].num_read_ops' 00:14:42.003 10:28:35 -- bdev/blockdev.sh@567 -- # io_count1=131075 00:14:42.003 10:28:35 -- bdev/blockdev.sh@569 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:14:42.003 10:28:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:42.003 10:28:35 -- common/autotest_common.sh@10 -- # set +x 00:14:42.003 10:28:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:42.003 10:28:35 -- bdev/blockdev.sh@569 -- # iostats_per_channel='{ 00:14:42.003 "tick_rate": 2200000000, 00:14:42.003 "ticks": 1766432302559, 00:14:42.003 "name": "Malloc_STAT", 
00:14:42.003 "channels": [ 00:14:42.003 { 00:14:42.003 "thread_id": 2, 00:14:42.003 "bytes_read": 277872640, 00:14:42.003 "num_read_ops": 67840, 00:14:42.003 "bytes_written": 0, 00:14:42.003 "num_write_ops": 0, 00:14:42.003 "bytes_unmapped": 0, 00:14:42.003 "num_unmap_ops": 0, 00:14:42.003 "bytes_copied": 0, 00:14:42.003 "num_copy_ops": 0, 00:14:42.003 "read_latency_ticks": 1113148141019, 00:14:42.003 "max_read_latency_ticks": 20802736, 00:14:42.003 "min_read_latency_ticks": 9670638, 00:14:42.003 "write_latency_ticks": 0, 00:14:42.003 "max_write_latency_ticks": 0, 00:14:42.003 "min_write_latency_ticks": 0, 00:14:42.003 "unmap_latency_ticks": 0, 00:14:42.003 "max_unmap_latency_ticks": 0, 00:14:42.003 "min_unmap_latency_ticks": 0, 00:14:42.003 "copy_latency_ticks": 0, 00:14:42.003 "max_copy_latency_ticks": 0, 00:14:42.003 "min_copy_latency_ticks": 0 00:14:42.003 }, 00:14:42.003 { 00:14:42.003 "thread_id": 3, 00:14:42.003 "bytes_read": 279969792, 00:14:42.003 "num_read_ops": 68352, 00:14:42.003 "bytes_written": 0, 00:14:42.003 "num_write_ops": 0, 00:14:42.003 "bytes_unmapped": 0, 00:14:42.003 "num_unmap_ops": 0, 00:14:42.003 "bytes_copied": 0, 00:14:42.003 "num_copy_ops": 0, 00:14:42.003 "read_latency_ticks": 1116239390540, 00:14:42.003 "max_read_latency_ticks": 19952458, 00:14:42.003 "min_read_latency_ticks": 12408638, 00:14:42.003 "write_latency_ticks": 0, 00:14:42.003 "max_write_latency_ticks": 0, 00:14:42.003 "min_write_latency_ticks": 0, 00:14:42.003 "unmap_latency_ticks": 0, 00:14:42.003 "max_unmap_latency_ticks": 0, 00:14:42.003 "min_unmap_latency_ticks": 0, 00:14:42.003 "copy_latency_ticks": 0, 00:14:42.003 "max_copy_latency_ticks": 0, 00:14:42.003 "min_copy_latency_ticks": 0 00:14:42.003 } 00:14:42.003 ] 00:14:42.003 }' 00:14:42.003 10:28:35 -- bdev/blockdev.sh@570 -- # jq -r '.channels[0].num_read_ops' 00:14:42.003 10:28:35 -- bdev/blockdev.sh@570 -- # io_count_per_channel1=67840 00:14:42.003 10:28:35 -- bdev/blockdev.sh@571 -- # io_count_per_channel_all=67840 00:14:42.003 10:28:35 -- bdev/blockdev.sh@572 -- # jq -r '.channels[1].num_read_ops' 00:14:42.261 10:28:35 -- bdev/blockdev.sh@572 -- # io_count_per_channel2=68352 00:14:42.261 10:28:35 -- bdev/blockdev.sh@573 -- # io_count_per_channel_all=136192 00:14:42.261 10:28:35 -- bdev/blockdev.sh@575 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:14:42.261 10:28:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:42.261 10:28:35 -- common/autotest_common.sh@10 -- # set +x 00:14:42.261 10:28:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:42.261 10:28:35 -- bdev/blockdev.sh@575 -- # iostats='{ 00:14:42.261 "tick_rate": 2200000000, 00:14:42.261 "ticks": 1766726533367, 00:14:42.261 "bdevs": [ 00:14:42.261 { 00:14:42.261 "name": "Malloc_STAT", 00:14:42.261 "bytes_read": 594579968, 00:14:42.261 "num_read_ops": 145155, 00:14:42.261 "bytes_written": 0, 00:14:42.261 "num_write_ops": 0, 00:14:42.261 "bytes_unmapped": 0, 00:14:42.261 "num_unmap_ops": 0, 00:14:42.261 "bytes_copied": 0, 00:14:42.261 "num_copy_ops": 0, 00:14:42.261 "read_latency_ticks": 2377648884759, 00:14:42.261 "max_read_latency_ticks": 20802736, 00:14:42.261 "min_read_latency_ticks": 386998, 00:14:42.261 "write_latency_ticks": 0, 00:14:42.261 "max_write_latency_ticks": 0, 00:14:42.262 "min_write_latency_ticks": 0, 00:14:42.262 "unmap_latency_ticks": 0, 00:14:42.262 "max_unmap_latency_ticks": 0, 00:14:42.262 "min_unmap_latency_ticks": 0, 00:14:42.262 "copy_latency_ticks": 0, 00:14:42.262 "max_copy_latency_ticks": 0, 00:14:42.262 "min_copy_latency_ticks": 0, 
00:14:42.262 "io_error": {} 00:14:42.262 } 00:14:42.262 ] 00:14:42.262 }' 00:14:42.262 10:28:35 -- bdev/blockdev.sh@576 -- # jq -r '.bdevs[0].num_read_ops' 00:14:42.262 10:28:36 -- bdev/blockdev.sh@576 -- # io_count2=145155 00:14:42.262 10:28:36 -- bdev/blockdev.sh@581 -- # '[' 136192 -lt 131075 ']' 00:14:42.262 10:28:36 -- bdev/blockdev.sh@581 -- # '[' 136192 -gt 145155 ']' 00:14:42.262 10:28:36 -- bdev/blockdev.sh@606 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:14:42.262 10:28:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:42.262 10:28:36 -- common/autotest_common.sh@10 -- # set +x 00:14:42.262 00:14:42.262 Latency(us) 00:14:42.262 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:42.262 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:14:42.262 Malloc_STAT : 2.19 34497.12 134.75 0.00 0.00 7399.17 1407.53 9472.93 00:14:42.262 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:14:42.262 Malloc_STAT : 2.19 34244.15 133.77 0.00 0.00 7455.42 714.94 9889.98 00:14:42.262 =================================================================================================================== 00:14:42.262 Total : 68741.27 268.52 0.00 0.00 7427.20 714.94 9889.98 00:14:42.262 0 00:14:42.262 10:28:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:42.262 10:28:36 -- bdev/blockdev.sh@607 -- # killprocess 114387 00:14:42.262 10:28:36 -- common/autotest_common.sh@926 -- # '[' -z 114387 ']' 00:14:42.262 10:28:36 -- common/autotest_common.sh@930 -- # kill -0 114387 00:14:42.262 10:28:36 -- common/autotest_common.sh@931 -- # uname 00:14:42.262 10:28:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:42.262 10:28:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 114387 00:14:42.519 10:28:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:42.519 10:28:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:42.519 killing process with pid 114387 00:14:42.519 10:28:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 114387' 00:14:42.519 Received shutdown signal, test time was about 2.316270 seconds 00:14:42.519 00:14:42.519 Latency(us) 00:14:42.519 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:42.519 =================================================================================================================== 00:14:42.519 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:42.519 10:28:36 -- common/autotest_common.sh@945 -- # kill 114387 00:14:42.519 10:28:36 -- common/autotest_common.sh@950 -- # wait 114387 00:14:43.453 10:28:37 -- bdev/blockdev.sh@608 -- # trap - SIGINT SIGTERM EXIT 00:14:43.453 00:14:43.453 real 0m4.720s 00:14:43.453 user 0m8.870s 00:14:43.453 sys 0m0.428s 00:14:43.453 10:28:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:43.453 ************************************ 00:14:43.453 END TEST bdev_stat 00:14:43.453 ************************************ 00:14:43.453 10:28:37 -- common/autotest_common.sh@10 -- # set +x 00:14:43.712 10:28:37 -- bdev/blockdev.sh@792 -- # [[ bdev == gpt ]] 00:14:43.712 10:28:37 -- bdev/blockdev.sh@796 -- # [[ bdev == crypto_sw ]] 00:14:43.712 10:28:37 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:14:43.712 10:28:37 -- bdev/blockdev.sh@809 -- # cleanup 00:14:43.712 10:28:37 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:14:43.712 10:28:37 -- bdev/blockdev.sh@22 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:43.712 10:28:37 -- bdev/blockdev.sh@24 -- # [[ bdev == rbd ]] 00:14:43.712 10:28:37 -- bdev/blockdev.sh@28 -- # [[ bdev == daos ]] 00:14:43.712 10:28:37 -- bdev/blockdev.sh@32 -- # [[ bdev = \g\p\t ]] 00:14:43.712 10:28:37 -- bdev/blockdev.sh@38 -- # [[ bdev == xnvme ]] 00:14:43.712 00:14:43.712 real 2m19.678s 00:14:43.712 user 5m47.529s 00:14:43.712 sys 0m20.375s 00:14:43.712 10:28:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:43.712 ************************************ 00:14:43.712 END TEST blockdev_general 00:14:43.712 ************************************ 00:14:43.712 10:28:37 -- common/autotest_common.sh@10 -- # set +x 00:14:43.712 10:28:37 -- spdk/autotest.sh@196 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:14:43.712 10:28:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:14:43.712 10:28:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:43.712 10:28:37 -- common/autotest_common.sh@10 -- # set +x 00:14:43.712 ************************************ 00:14:43.712 START TEST bdev_raid 00:14:43.712 ************************************ 00:14:43.712 10:28:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:14:43.712 * Looking for test storage... 00:14:43.712 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:14:43.712 10:28:37 -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:14:43.712 10:28:37 -- bdev/nbd_common.sh@6 -- # set -e 00:14:43.712 10:28:37 -- bdev/bdev_raid.sh@14 -- # rpc_py='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:14:43.712 10:28:37 -- bdev/bdev_raid.sh@714 -- # trap 'on_error_exit;' ERR 00:14:43.712 10:28:37 -- bdev/bdev_raid.sh@716 -- # uname -s 00:14:43.712 10:28:37 -- bdev/bdev_raid.sh@716 -- # '[' Linux = Linux ']' 00:14:43.712 10:28:37 -- bdev/bdev_raid.sh@716 -- # modprobe -n nbd 00:14:43.712 10:28:37 -- bdev/bdev_raid.sh@717 -- # has_nbd=true 00:14:43.712 10:28:37 -- bdev/bdev_raid.sh@718 -- # modprobe nbd 00:14:43.712 10:28:37 -- bdev/bdev_raid.sh@719 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:14:43.712 10:28:37 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:43.712 10:28:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:43.712 10:28:37 -- common/autotest_common.sh@10 -- # set +x 00:14:43.712 ************************************ 00:14:43.712 START TEST raid_function_test_raid0 00:14:43.712 ************************************ 00:14:43.712 10:28:37 -- common/autotest_common.sh@1104 -- # raid_function_test raid0 00:14:43.712 10:28:37 -- bdev/bdev_raid.sh@81 -- # local raid_level=raid0 00:14:43.712 10:28:37 -- bdev/bdev_raid.sh@82 -- # local nbd=/dev/nbd0 00:14:43.712 10:28:37 -- bdev/bdev_raid.sh@83 -- # local raid_bdev 00:14:43.712 10:28:37 -- bdev/bdev_raid.sh@86 -- # raid_pid=114538 00:14:43.712 10:28:37 -- bdev/bdev_raid.sh@85 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:43.712 10:28:37 -- bdev/bdev_raid.sh@87 -- # echo 'Process raid pid: 114538' 00:14:43.712 Process raid pid: 114538 00:14:43.712 10:28:37 -- bdev/bdev_raid.sh@88 -- # waitforlisten 114538 /var/tmp/spdk-raid.sock 00:14:43.712 10:28:37 -- common/autotest_common.sh@819 -- # '[' -z 114538 ']' 00:14:43.712 10:28:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:43.712 10:28:37 -- 
common/autotest_common.sh@824 -- # local max_retries=100 00:14:43.712 10:28:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:43.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:43.712 10:28:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:43.712 10:28:37 -- common/autotest_common.sh@10 -- # set +x 00:14:43.972 [2024-07-12 10:28:37.632566] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:43.972 [2024-07-12 10:28:37.632778] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:43.972 [2024-07-12 10:28:37.797884] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.230 [2024-07-12 10:28:37.979781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:44.487 [2024-07-12 10:28:38.168406] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:44.745 10:28:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:44.745 10:28:38 -- common/autotest_common.sh@852 -- # return 0 00:14:44.745 10:28:38 -- bdev/bdev_raid.sh@90 -- # configure_raid_bdev raid0 00:14:44.745 10:28:38 -- bdev/bdev_raid.sh@67 -- # local raid_level=raid0 00:14:44.745 10:28:38 -- bdev/bdev_raid.sh@68 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:14:44.745 10:28:38 -- bdev/bdev_raid.sh@70 -- # cat 00:14:44.745 10:28:38 -- bdev/bdev_raid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:14:45.003 [2024-07-12 10:28:38.835778] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:14:45.003 [2024-07-12 10:28:38.837695] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:14:45.003 [2024-07-12 10:28:38.837769] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:14:45.003 [2024-07-12 10:28:38.837781] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:45.003 [2024-07-12 10:28:38.837926] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005520 00:14:45.003 [2024-07-12 10:28:38.838238] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:14:45.003 [2024-07-12 10:28:38.838259] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x616000007280 00:14:45.003 [2024-07-12 10:28:38.838395] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:45.003 Base_1 00:14:45.003 Base_2 00:14:45.003 10:28:38 -- bdev/bdev_raid.sh@77 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:14:45.003 10:28:38 -- bdev/bdev_raid.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:14:45.003 10:28:38 -- bdev/bdev_raid.sh@91 -- # jq -r '.[0]["name"] | select(.)' 00:14:45.260 10:28:39 -- bdev/bdev_raid.sh@91 -- # raid_bdev=raid 00:14:45.260 10:28:39 -- bdev/bdev_raid.sh@92 -- # '[' raid = '' ']' 00:14:45.260 10:28:39 -- bdev/bdev_raid.sh@97 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:14:45.260 10:28:39 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:45.260 10:28:39 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 
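A note on the bdev_stat pass that finished just above: the suite takes a whole-bdev iostat snapshot, a per-channel snapshot, and a second whole-bdev snapshot, then checks that the summed per-channel read count lands between the two totals. A minimal sketch of that query pattern, assuming a bdevperf app is already serving RPC on /var/tmp/spdk.sock with a Malloc_STAT bdev under I/O (the rpc.py path, socket, and bdev name are taken from the trace; this is a condensed illustration, not the test script itself):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk.sock
    bdev=Malloc_STAT

    # Snapshot 1: total reads so far for the whole bdev.
    io_count1=$("$rpc" -s "$sock" bdev_get_iostat -b "$bdev" | jq -r '.bdevs[0].num_read_ops')

    # Per-channel snapshot taken in between (-c); sum reads across all channels.
    channel_sum=$("$rpc" -s "$sock" bdev_get_iostat -b "$bdev" -c | jq -r '[.channels[].num_read_ops] | add')

    # Snapshot 2: total reads again.
    io_count2=$("$rpc" -s "$sock" bdev_get_iostat -b "$bdev" | jq -r '.bdevs[0].num_read_ops')

    # While I/O is running, the in-between channel sum must land in [io_count1, io_count2].
    if [ "$channel_sum" -lt "$io_count1" ] || [ "$channel_sum" -gt "$io_count2" ]; then
        echo "channel sum $channel_sum outside [$io_count1, $io_count2]" >&2
        exit 1
    fi

In the run above the three values were 131075, 67840 + 68352 = 136192, and 145155, so both comparisons passed.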
00:14:45.260 10:28:39 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:45.260 10:28:39 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:14:45.260 10:28:39 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:45.260 10:28:39 -- bdev/nbd_common.sh@12 -- # local i 00:14:45.260 10:28:39 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:45.260 10:28:39 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:45.260 10:28:39 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:14:45.518 [2024-07-12 10:28:39.267815] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:14:45.518 /dev/nbd0 00:14:45.518 10:28:39 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:45.518 10:28:39 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:45.518 10:28:39 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:14:45.518 10:28:39 -- common/autotest_common.sh@857 -- # local i 00:14:45.518 10:28:39 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:14:45.518 10:28:39 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:14:45.518 10:28:39 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:14:45.518 10:28:39 -- common/autotest_common.sh@861 -- # break 00:14:45.518 10:28:39 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:14:45.518 10:28:39 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:14:45.518 10:28:39 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:45.518 1+0 records in 00:14:45.518 1+0 records out 00:14:45.518 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000282357 s, 14.5 MB/s 00:14:45.518 10:28:39 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:45.518 10:28:39 -- common/autotest_common.sh@874 -- # size=4096 00:14:45.518 10:28:39 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:45.518 10:28:39 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:14:45.518 10:28:39 -- common/autotest_common.sh@877 -- # return 0 00:14:45.518 10:28:39 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:45.518 10:28:39 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:45.518 10:28:39 -- bdev/bdev_raid.sh@98 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:14:45.518 10:28:39 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:45.518 10:28:39 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:14:45.777 10:28:39 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:45.777 { 00:14:45.777 "nbd_device": "/dev/nbd0", 00:14:45.777 "bdev_name": "raid" 00:14:45.777 } 00:14:45.777 ]' 00:14:45.777 10:28:39 -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:45.777 { 00:14:45.777 "nbd_device": "/dev/nbd0", 00:14:45.777 "bdev_name": "raid" 00:14:45.777 } 00:14:45.777 ]' 00:14:45.777 10:28:39 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:45.777 10:28:39 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:14:45.777 10:28:39 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:14:45.777 10:28:39 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:45.777 10:28:39 -- bdev/nbd_common.sh@65 -- # count=1 00:14:45.777 10:28:39 -- bdev/nbd_common.sh@66 -- # echo 1 00:14:45.777 10:28:39 -- bdev/bdev_raid.sh@98 -- # count=1 00:14:45.777 10:28:39 -- bdev/bdev_raid.sh@99 -- # '[' 1 -ne 1 ']' 00:14:45.777 10:28:39 -- 
bdev/bdev_raid.sh@103 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:14:45.777 10:28:39 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:14:45.777 10:28:39 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:14:45.777 10:28:39 -- bdev/bdev_raid.sh@19 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:45.777 10:28:39 -- bdev/bdev_raid.sh@20 -- # local blksize 00:14:45.777 10:28:39 -- bdev/bdev_raid.sh@21 -- # lsblk -o LOG-SEC /dev/nbd0 00:14:45.777 10:28:39 -- bdev/bdev_raid.sh@21 -- # grep -v LOG-SEC 00:14:45.777 10:28:39 -- bdev/bdev_raid.sh@21 -- # cut -d ' ' -f 5 00:14:45.777 10:28:39 -- bdev/bdev_raid.sh@21 -- # blksize=512 00:14:45.777 10:28:39 -- bdev/bdev_raid.sh@22 -- # local rw_blk_num=4096 00:14:45.777 10:28:39 -- bdev/bdev_raid.sh@23 -- # local rw_len=2097152 00:14:45.777 10:28:39 -- bdev/bdev_raid.sh@24 -- # unmap_blk_offs=(0 1028 321) 00:14:45.777 10:28:39 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_offs 00:14:45.777 10:28:39 -- bdev/bdev_raid.sh@25 -- # unmap_blk_nums=(128 2035 456) 00:14:45.777 10:28:39 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_nums 00:14:45.777 10:28:39 -- bdev/bdev_raid.sh@26 -- # local unmap_off 00:14:45.777 10:28:39 -- bdev/bdev_raid.sh@27 -- # local unmap_len 00:14:45.777 10:28:39 -- bdev/bdev_raid.sh@30 -- # dd if=/dev/urandom of=/raidrandtest bs=512 count=4096 00:14:45.777 4096+0 records in 00:14:45.777 4096+0 records out 00:14:45.777 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0255748 s, 82.0 MB/s 00:14:45.777 10:28:39 -- bdev/bdev_raid.sh@31 -- # dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:14:46.035 4096+0 records in 00:14:46.035 4096+0 records out 00:14:46.035 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.244801 s, 8.6 MB/s 00:14:46.035 10:28:39 -- bdev/bdev_raid.sh@32 -- # blockdev --flushbufs /dev/nbd0 00:14:46.035 10:28:39 -- bdev/bdev_raid.sh@35 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:46.035 10:28:39 -- bdev/bdev_raid.sh@37 -- # (( i = 0 )) 00:14:46.035 10:28:39 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:46.035 10:28:39 -- bdev/bdev_raid.sh@38 -- # unmap_off=0 00:14:46.035 10:28:39 -- bdev/bdev_raid.sh@39 -- # unmap_len=65536 00:14:46.035 10:28:39 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:14:46.035 128+0 records in 00:14:46.035 128+0 records out 00:14:46.035 65536 bytes (66 kB, 64 KiB) copied, 0.000631951 s, 104 MB/s 00:14:46.035 10:28:39 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:14:46.035 10:28:39 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:14:46.035 10:28:39 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:46.035 10:28:39 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:14:46.035 10:28:39 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:46.035 10:28:39 -- bdev/bdev_raid.sh@38 -- # unmap_off=526336 00:14:46.035 10:28:39 -- bdev/bdev_raid.sh@39 -- # unmap_len=1041920 00:14:46.035 10:28:39 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:14:46.035 2035+0 records in 00:14:46.035 2035+0 records out 00:14:46.035 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00861011 s, 121 MB/s 00:14:46.035 10:28:39 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:14:46.035 10:28:39 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:14:46.035 10:28:39 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:46.035 10:28:39 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:14:46.035 
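The xtrace here is the unmap/data-verify loop: fill the exported raid device with known random data, discard a block range on the device, zero the same range in a local reference file, and cmp the two images after each step. One iteration condensed into plain commands, assuming /dev/nbd0 already carries the raid bdev (the offsets, counts, and file paths are the test's own):

    nbd=/dev/nbd0
    blksize=512                         # from: lsblk -o LOG-SEC /dev/nbd0

    # Seed the reference file and the device with identical random data, then compare.
    dd if=/dev/urandom of=/raidrandtest bs=$blksize count=4096
    dd if=/raidrandtest of=$nbd bs=$blksize count=4096 oflag=direct
    blockdev --flushbufs $nbd
    cmp -b -n $((4096 * blksize)) /raidrandtest $nbd

    # Zero one (offset, length) pair in the reference, discard the same range on the device.
    off=1028 num=2035
    dd if=/dev/zero of=/raidrandtest bs=$blksize seek=$off count=$num conv=notrunc
    blkdiscard -o $((off * blksize)) -l $((num * blksize)) $nbd
    blockdev --flushbufs $nbd
    cmp -b -n $((4096 * blksize)) /raidrandtest $nbd   # discarded range must now read back as zeroes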
10:28:39 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:46.035 10:28:39 -- bdev/bdev_raid.sh@38 -- # unmap_off=164352 00:14:46.035 10:28:39 -- bdev/bdev_raid.sh@39 -- # unmap_len=233472 00:14:46.035 10:28:39 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:14:46.035 456+0 records in 00:14:46.035 456+0 records out 00:14:46.035 233472 bytes (233 kB, 228 KiB) copied, 0.0018675 s, 125 MB/s 00:14:46.035 10:28:39 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:14:46.035 10:28:39 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:14:46.035 10:28:39 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:46.035 10:28:39 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:14:46.035 10:28:39 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:46.035 10:28:39 -- bdev/bdev_raid.sh@53 -- # return 0 00:14:46.035 10:28:39 -- bdev/bdev_raid.sh@105 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:14:46.035 10:28:39 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:46.035 10:28:39 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:14:46.035 10:28:39 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:46.035 10:28:39 -- bdev/nbd_common.sh@51 -- # local i 00:14:46.035 10:28:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:46.035 10:28:39 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:14:46.293 10:28:40 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:46.293 10:28:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:46.293 10:28:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:46.293 10:28:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:46.293 10:28:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:46.293 10:28:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:46.293 [2024-07-12 10:28:40.189137] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:46.293 10:28:40 -- bdev/nbd_common.sh@41 -- # break 00:14:46.293 10:28:40 -- bdev/nbd_common.sh@45 -- # return 0 00:14:46.293 10:28:40 -- bdev/bdev_raid.sh@106 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:14:46.293 10:28:40 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:46.293 10:28:40 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:14:46.551 10:28:40 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:46.551 10:28:40 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:46.551 10:28:40 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:46.919 10:28:40 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:46.919 10:28:40 -- bdev/nbd_common.sh@65 -- # echo '' 00:14:46.919 10:28:40 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:46.919 10:28:40 -- bdev/nbd_common.sh@65 -- # true 00:14:46.919 10:28:40 -- bdev/nbd_common.sh@65 -- # count=0 00:14:46.919 10:28:40 -- bdev/nbd_common.sh@66 -- # echo 0 00:14:46.919 10:28:40 -- bdev/bdev_raid.sh@106 -- # count=0 00:14:46.919 10:28:40 -- bdev/bdev_raid.sh@107 -- # '[' 0 -ne 0 ']' 00:14:46.919 10:28:40 -- bdev/bdev_raid.sh@111 -- # killprocess 114538 00:14:46.919 10:28:40 -- common/autotest_common.sh@926 -- # '[' -z 114538 ']' 00:14:46.919 10:28:40 -- common/autotest_common.sh@930 -- # kill -0 114538 00:14:46.919 10:28:40 -- common/autotest_common.sh@931 -- # uname 00:14:46.919 10:28:40 -- common/autotest_common.sh@931 -- # '[' 
Linux = Linux ']' 00:14:46.919 10:28:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 114538 00:14:46.919 10:28:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:46.919 killing process with pid 114538 00:14:46.919 10:28:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:46.919 10:28:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 114538' 00:14:46.919 10:28:40 -- common/autotest_common.sh@945 -- # kill 114538 00:14:46.919 10:28:40 -- common/autotest_common.sh@950 -- # wait 114538 00:14:46.919 [2024-07-12 10:28:40.513534] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:46.919 [2024-07-12 10:28:40.513628] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:46.919 [2024-07-12 10:28:40.513704] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:46.919 [2024-07-12 10:28:40.513717] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid, state offline 00:14:46.919 [2024-07-12 10:28:40.655932] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:47.853 10:28:41 -- bdev/bdev_raid.sh@113 -- # return 0 00:14:47.853 00:14:47.853 real 0m4.104s 00:14:47.853 user 0m5.138s 00:14:47.853 sys 0m0.976s 00:14:47.853 10:28:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:47.853 10:28:41 -- common/autotest_common.sh@10 -- # set +x 00:14:47.853 ************************************ 00:14:47.853 END TEST raid_function_test_raid0 00:14:47.853 ************************************ 00:14:47.853 10:28:41 -- bdev/bdev_raid.sh@720 -- # run_test raid_function_test_concat raid_function_test concat 00:14:47.853 10:28:41 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:47.853 10:28:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:47.853 10:28:41 -- common/autotest_common.sh@10 -- # set +x 00:14:47.853 ************************************ 00:14:47.853 START TEST raid_function_test_concat 00:14:47.853 ************************************ 00:14:47.853 10:28:41 -- common/autotest_common.sh@1104 -- # raid_function_test concat 00:14:47.853 10:28:41 -- bdev/bdev_raid.sh@81 -- # local raid_level=concat 00:14:47.853 10:28:41 -- bdev/bdev_raid.sh@82 -- # local nbd=/dev/nbd0 00:14:47.853 10:28:41 -- bdev/bdev_raid.sh@83 -- # local raid_bdev 00:14:47.853 10:28:41 -- bdev/bdev_raid.sh@86 -- # raid_pid=114711 00:14:47.853 10:28:41 -- bdev/bdev_raid.sh@87 -- # echo 'Process raid pid: 114711' 00:14:47.853 Process raid pid: 114711 00:14:47.853 10:28:41 -- bdev/bdev_raid.sh@88 -- # waitforlisten 114711 /var/tmp/spdk-raid.sock 00:14:47.853 10:28:41 -- bdev/bdev_raid.sh@85 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:47.853 10:28:41 -- common/autotest_common.sh@819 -- # '[' -z 114711 ']' 00:14:47.853 10:28:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:47.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:47.853 10:28:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:47.853 10:28:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
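Teardown between tests goes through the killprocess helper traced above: verify the pid is alive, make sure it is a reactor process rather than sudo, then kill and reap it. A hedged reconstruction of the Linux branch from the xtrace (the real helper lives in common/autotest_common.sh and handles more cases than shown here):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                  # is the process still alive?
        if [ "$(uname)" = Linux ]; then
            # SPDK apps report their primary thread as reactor_0;
            # refuse to signal anything that turns out to be sudo itself.
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [ "$process_name" != sudo ] || return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                 # reap it and propagate its exit status
    }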
00:14:47.853 10:28:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:47.853 10:28:41 -- common/autotest_common.sh@10 -- # set +x 00:14:48.110 [2024-07-12 10:28:41.791220] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:48.110 [2024-07-12 10:28:41.791411] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:48.110 [2024-07-12 10:28:41.955976] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:48.368 [2024-07-12 10:28:42.133014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:48.626 [2024-07-12 10:28:42.321217] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:48.884 10:28:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:48.884 10:28:42 -- common/autotest_common.sh@852 -- # return 0 00:14:48.884 10:28:42 -- bdev/bdev_raid.sh@90 -- # configure_raid_bdev concat 00:14:48.884 10:28:42 -- bdev/bdev_raid.sh@67 -- # local raid_level=concat 00:14:48.884 10:28:42 -- bdev/bdev_raid.sh@68 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:14:48.884 10:28:42 -- bdev/bdev_raid.sh@70 -- # cat 00:14:48.884 10:28:42 -- bdev/bdev_raid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:14:49.141 [2024-07-12 10:28:43.032450] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:14:49.141 [2024-07-12 10:28:43.034368] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:14:49.141 [2024-07-12 10:28:43.034445] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:14:49.141 [2024-07-12 10:28:43.034457] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:49.141 [2024-07-12 10:28:43.034591] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005520 00:14:49.141 [2024-07-12 10:28:43.034895] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:14:49.141 [2024-07-12 10:28:43.034917] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x616000007280 00:14:49.141 [2024-07-12 10:28:43.035065] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:49.141 Base_1 00:14:49.141 Base_2 00:14:49.141 10:28:43 -- bdev/bdev_raid.sh@77 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:14:49.141 10:28:43 -- bdev/bdev_raid.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:14:49.141 10:28:43 -- bdev/bdev_raid.sh@91 -- # jq -r '.[0]["name"] | select(.)' 00:14:49.398 10:28:43 -- bdev/bdev_raid.sh@91 -- # raid_bdev=raid 00:14:49.398 10:28:43 -- bdev/bdev_raid.sh@92 -- # '[' raid = '' ']' 00:14:49.398 10:28:43 -- bdev/bdev_raid.sh@97 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:14:49.398 10:28:43 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:49.398 10:28:43 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:14:49.398 10:28:43 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:49.398 10:28:43 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:14:49.398 10:28:43 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:49.398 10:28:43 -- bdev/nbd_common.sh@12 -- # local i 00:14:49.398 10:28:43 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 
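The configure_raid_bdev step traced here (rm -rf rpcs.txt, cat, then one rpc.py call) batches the setup RPCs through a file rather than invoking rpc.py per command. The batch contents are not echoed into the log, so the bdev_malloc_create lines below are a plausible reconstruction consistent with the "blockcnt 131072, blocklen 512" seen at configure time, not a verbatim copy:

    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    rm -rf rpcs.txt
    cat <<- EOF >> rpcs.txt
        bdev_malloc_create 32 512 -b Base_1
        bdev_malloc_create 32 512 -b Base_2
        bdev_raid_create -z 64 -r concat -b "Base_1 Base_2" -n raid
    EOF
    $rpc_py < rpcs.txt        # rpc.py executes one RPC per stdin line

Batching this way keeps a single rpc.py process alive for all three calls instead of paying Python startup cost per command.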
00:14:49.398 10:28:43 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:49.398 10:28:43 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:14:49.655 [2024-07-12 10:28:43.496504] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:14:49.655 /dev/nbd0 00:14:49.655 10:28:43 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:49.655 10:28:43 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:49.655 10:28:43 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:14:49.655 10:28:43 -- common/autotest_common.sh@857 -- # local i 00:14:49.655 10:28:43 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:14:49.655 10:28:43 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:14:49.655 10:28:43 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:14:49.655 10:28:43 -- common/autotest_common.sh@861 -- # break 00:14:49.655 10:28:43 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:14:49.655 10:28:43 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:14:49.655 10:28:43 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:49.655 1+0 records in 00:14:49.655 1+0 records out 00:14:49.655 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000298705 s, 13.7 MB/s 00:14:49.655 10:28:43 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:49.655 10:28:43 -- common/autotest_common.sh@874 -- # size=4096 00:14:49.655 10:28:43 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:49.655 10:28:43 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:14:49.655 10:28:43 -- common/autotest_common.sh@877 -- # return 0 00:14:49.655 10:28:43 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:49.655 10:28:43 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:49.655 10:28:43 -- bdev/bdev_raid.sh@98 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:14:49.655 10:28:43 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:49.655 10:28:43 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:14:49.913 10:28:43 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:49.913 { 00:14:49.913 "nbd_device": "/dev/nbd0", 00:14:49.913 "bdev_name": "raid" 00:14:49.913 } 00:14:49.913 ]' 00:14:49.913 10:28:43 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:49.913 10:28:43 -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:49.913 { 00:14:49.913 "nbd_device": "/dev/nbd0", 00:14:49.913 "bdev_name": "raid" 00:14:49.913 } 00:14:49.913 ]' 00:14:49.913 10:28:43 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:14:49.913 10:28:43 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:49.913 10:28:43 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:14:49.913 10:28:43 -- bdev/nbd_common.sh@65 -- # count=1 00:14:49.913 10:28:43 -- bdev/nbd_common.sh@66 -- # echo 1 00:14:49.913 10:28:43 -- bdev/bdev_raid.sh@98 -- # count=1 00:14:49.913 10:28:43 -- bdev/bdev_raid.sh@99 -- # '[' 1 -ne 1 ']' 00:14:49.913 10:28:43 -- bdev/bdev_raid.sh@103 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:14:49.913 10:28:43 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:14:49.913 10:28:43 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:14:49.913 10:28:43 -- bdev/bdev_raid.sh@19 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:49.913 10:28:43 -- 
bdev/bdev_raid.sh@20 -- # local blksize 00:14:49.913 10:28:43 -- bdev/bdev_raid.sh@21 -- # lsblk -o LOG-SEC /dev/nbd0 00:14:49.913 10:28:43 -- bdev/bdev_raid.sh@21 -- # cut -d ' ' -f 5 00:14:49.913 10:28:43 -- bdev/bdev_raid.sh@21 -- # grep -v LOG-SEC 00:14:49.913 10:28:43 -- bdev/bdev_raid.sh@21 -- # blksize=512 00:14:49.913 10:28:43 -- bdev/bdev_raid.sh@22 -- # local rw_blk_num=4096 00:14:49.913 10:28:43 -- bdev/bdev_raid.sh@23 -- # local rw_len=2097152 00:14:49.913 10:28:43 -- bdev/bdev_raid.sh@24 -- # unmap_blk_offs=(0 1028 321) 00:14:49.913 10:28:43 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_offs 00:14:49.913 10:28:43 -- bdev/bdev_raid.sh@25 -- # unmap_blk_nums=(128 2035 456) 00:14:49.913 10:28:43 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_nums 00:14:49.913 10:28:43 -- bdev/bdev_raid.sh@26 -- # local unmap_off 00:14:49.913 10:28:43 -- bdev/bdev_raid.sh@27 -- # local unmap_len 00:14:49.913 10:28:43 -- bdev/bdev_raid.sh@30 -- # dd if=/dev/urandom of=/raidrandtest bs=512 count=4096 00:14:50.170 4096+0 records in 00:14:50.170 4096+0 records out 00:14:50.170 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0153009 s, 137 MB/s 00:14:50.170 10:28:43 -- bdev/bdev_raid.sh@31 -- # dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:14:50.170 4096+0 records in 00:14:50.170 4096+0 records out 00:14:50.170 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.249158 s, 8.4 MB/s 00:14:50.170 10:28:44 -- bdev/bdev_raid.sh@32 -- # blockdev --flushbufs /dev/nbd0 00:14:50.427 10:28:44 -- bdev/bdev_raid.sh@35 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:50.427 10:28:44 -- bdev/bdev_raid.sh@37 -- # (( i = 0 )) 00:14:50.427 10:28:44 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:50.427 10:28:44 -- bdev/bdev_raid.sh@38 -- # unmap_off=0 00:14:50.427 10:28:44 -- bdev/bdev_raid.sh@39 -- # unmap_len=65536 00:14:50.427 10:28:44 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:14:50.427 128+0 records in 00:14:50.427 128+0 records out 00:14:50.427 65536 bytes (66 kB, 64 KiB) copied, 0.000589022 s, 111 MB/s 00:14:50.427 10:28:44 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:14:50.427 10:28:44 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:14:50.427 10:28:44 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:50.427 10:28:44 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:14:50.427 10:28:44 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:50.427 10:28:44 -- bdev/bdev_raid.sh@38 -- # unmap_off=526336 00:14:50.427 10:28:44 -- bdev/bdev_raid.sh@39 -- # unmap_len=1041920 00:14:50.427 10:28:44 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:14:50.427 2035+0 records in 00:14:50.427 2035+0 records out 00:14:50.427 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00658136 s, 158 MB/s 00:14:50.427 10:28:44 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:14:50.427 10:28:44 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:14:50.427 10:28:44 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:50.427 10:28:44 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:14:50.427 10:28:44 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:50.427 10:28:44 -- bdev/bdev_raid.sh@38 -- # unmap_off=164352 00:14:50.427 10:28:44 -- bdev/bdev_raid.sh@39 -- # unmap_len=233472 00:14:50.427 10:28:44 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:14:50.427 456+0 records in 
00:14:50.427 456+0 records out 00:14:50.427 233472 bytes (233 kB, 228 KiB) copied, 0.00211947 s, 110 MB/s 00:14:50.427 10:28:44 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:14:50.427 10:28:44 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:14:50.427 10:28:44 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:50.427 10:28:44 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:14:50.427 10:28:44 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:50.427 10:28:44 -- bdev/bdev_raid.sh@53 -- # return 0 00:14:50.427 10:28:44 -- bdev/bdev_raid.sh@105 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:14:50.427 10:28:44 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:50.428 10:28:44 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:14:50.428 10:28:44 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:50.428 10:28:44 -- bdev/nbd_common.sh@51 -- # local i 00:14:50.428 10:28:44 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:50.428 10:28:44 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:14:50.428 10:28:44 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:50.428 10:28:44 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:50.428 10:28:44 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:50.428 10:28:44 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:50.428 10:28:44 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:50.428 10:28:44 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:50.428 10:28:44 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:14:50.428 [2024-07-12 10:28:44.341044] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:50.685 10:28:44 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:14:50.685 10:28:44 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:50.685 10:28:44 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:50.685 10:28:44 -- bdev/nbd_common.sh@41 -- # break 00:14:50.685 10:28:44 -- bdev/nbd_common.sh@45 -- # return 0 00:14:50.685 10:28:44 -- bdev/bdev_raid.sh@106 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:14:50.685 10:28:44 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:50.685 10:28:44 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:14:50.943 10:28:44 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:50.943 10:28:44 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:50.943 10:28:44 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:50.943 10:28:44 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:50.943 10:28:44 -- bdev/nbd_common.sh@65 -- # echo '' 00:14:50.943 10:28:44 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:50.943 10:28:44 -- bdev/nbd_common.sh@65 -- # true 00:14:50.943 10:28:44 -- bdev/nbd_common.sh@65 -- # count=0 00:14:50.943 10:28:44 -- bdev/nbd_common.sh@66 -- # echo 0 00:14:50.943 10:28:44 -- bdev/bdev_raid.sh@106 -- # count=0 00:14:50.943 10:28:44 -- bdev/bdev_raid.sh@107 -- # '[' 0 -ne 0 ']' 00:14:50.943 10:28:44 -- bdev/bdev_raid.sh@111 -- # killprocess 114711 00:14:50.943 10:28:44 -- common/autotest_common.sh@926 -- # '[' -z 114711 ']' 00:14:50.943 10:28:44 -- common/autotest_common.sh@930 -- # kill -0 114711 00:14:50.943 10:28:44 -- common/autotest_common.sh@931 -- # uname 00:14:50.943 10:28:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:50.944 10:28:44 -- 
common/autotest_common.sh@932 -- # ps --no-headers -o comm= 114711 00:14:50.944 10:28:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:50.944 10:28:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:50.944 10:28:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 114711' 00:14:50.944 killing process with pid 114711 00:14:50.944 10:28:44 -- common/autotest_common.sh@945 -- # kill 114711 00:14:50.944 10:28:44 -- common/autotest_common.sh@950 -- # wait 114711 00:14:50.944 [2024-07-12 10:28:44.791609] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:50.944 [2024-07-12 10:28:44.791683] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:50.944 [2024-07-12 10:28:44.791724] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:50.944 [2024-07-12 10:28:44.791752] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid, state offline 00:14:51.202 [2024-07-12 10:28:44.921120] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:52.137 10:28:45 -- bdev/bdev_raid.sh@113 -- # return 0 00:14:52.137 00:14:52.137 real 0m4.204s 00:14:52.137 user 0m5.510s 00:14:52.137 sys 0m0.704s 00:14:52.137 10:28:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:52.137 10:28:45 -- common/autotest_common.sh@10 -- # set +x 00:14:52.137 ************************************ 00:14:52.138 END TEST raid_function_test_concat 00:14:52.138 ************************************ 00:14:52.138 10:28:45 -- bdev/bdev_raid.sh@723 -- # run_test raid0_resize_test raid0_resize_test 00:14:52.138 10:28:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:14:52.138 10:28:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:52.138 10:28:45 -- common/autotest_common.sh@10 -- # set +x 00:14:52.138 ************************************ 00:14:52.138 START TEST raid0_resize_test 00:14:52.138 ************************************ 00:14:52.138 10:28:45 -- common/autotest_common.sh@1104 -- # raid0_resize_test 00:14:52.138 10:28:45 -- bdev/bdev_raid.sh@293 -- # local blksize=512 00:14:52.138 10:28:45 -- bdev/bdev_raid.sh@294 -- # local bdev_size_mb=32 00:14:52.138 10:28:45 -- bdev/bdev_raid.sh@295 -- # local new_bdev_size_mb=64 00:14:52.138 10:28:45 -- bdev/bdev_raid.sh@296 -- # local blkcnt 00:14:52.138 10:28:45 -- bdev/bdev_raid.sh@297 -- # local raid_size_mb 00:14:52.138 10:28:45 -- bdev/bdev_raid.sh@298 -- # local new_raid_size_mb 00:14:52.138 10:28:45 -- bdev/bdev_raid.sh@301 -- # raid_pid=114865 00:14:52.138 10:28:45 -- bdev/bdev_raid.sh@302 -- # echo 'Process raid pid: 114865' 00:14:52.138 Process raid pid: 114865 00:14:52.138 10:28:45 -- bdev/bdev_raid.sh@303 -- # waitforlisten 114865 /var/tmp/spdk-raid.sock 00:14:52.138 10:28:45 -- bdev/bdev_raid.sh@300 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:52.138 10:28:45 -- common/autotest_common.sh@819 -- # '[' -z 114865 ']' 00:14:52.138 10:28:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:52.138 10:28:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:52.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:52.138 10:28:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
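The zero-count assertion above is the nbd_get_count pattern: ask the app which NBD devices it still exports and count the names, tolerating zero matches. A short sketch against the raid socket (rpc.py path and socket are from the trace; grep -c exits non-zero when nothing matches, hence the || true):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    names=$("$rpc" -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device')
    count=$(grep -c /dev/nbd <<< "$names" || true)
    echo "NBD devices still exported: $count"      # 0 once nbd_stop_disk has run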
00:14:52.138 10:28:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:52.138 10:28:45 -- common/autotest_common.sh@10 -- # set +x 00:14:52.138 [2024-07-12 10:28:46.045074] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:52.138 [2024-07-12 10:28:46.045271] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:52.396 [2024-07-12 10:28:46.209022] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:52.654 [2024-07-12 10:28:46.386775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:52.912 [2024-07-12 10:28:46.575082] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:53.170 10:28:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:53.170 10:28:46 -- common/autotest_common.sh@852 -- # return 0 00:14:53.170 10:28:46 -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:14:53.429 Base_1 00:14:53.429 10:28:47 -- bdev/bdev_raid.sh@306 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:14:53.687 Base_2 00:14:53.687 10:28:47 -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:14:53.687 [2024-07-12 10:28:47.537767] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:14:53.687 [2024-07-12 10:28:47.539619] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:14:53.687 [2024-07-12 10:28:47.539683] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:14:53.687 [2024-07-12 10:28:47.539695] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:53.687 [2024-07-12 10:28:47.539836] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005380 00:14:53.687 [2024-07-12 10:28:47.540114] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:14:53.687 [2024-07-12 10:28:47.540136] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x616000007280 00:14:53.687 [2024-07-12 10:28:47.540281] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:53.687 10:28:47 -- bdev/bdev_raid.sh@311 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:14:53.945 [2024-07-12 10:28:47.773821] bdev_raid.c:2069:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:14:53.945 [2024-07-12 10:28:47.773847] bdev_raid.c:2082:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:14:53.945 true 00:14:53.945 10:28:47 -- bdev/bdev_raid.sh@314 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:14:53.945 10:28:47 -- bdev/bdev_raid.sh@314 -- # jq '.[].num_blocks' 00:14:54.203 [2024-07-12 10:28:47.949939] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:54.203 10:28:47 -- bdev/bdev_raid.sh@314 -- # blkcnt=131072 00:14:54.203 10:28:47 -- bdev/bdev_raid.sh@315 -- # raid_size_mb=64 00:14:54.203 10:28:47 -- bdev/bdev_raid.sh@316 -- # '[' 64 '!=' 64 ']' 00:14:54.203 
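For reference, the raid0 resize scenario this test walks through fits in a handful of RPCs against a bare app; every command and size below appears verbatim in the trace. The behavior being checked: a raid0 volume only grows once all of its base bdevs have grown, because its usable size is capped by the smallest leg.

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    $rpc bdev_null_create Base_1 32 512                   # 32 MiB null bdev, 512-byte blocks
    $rpc bdev_null_create Base_2 32 512
    $rpc bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid

    $rpc bdev_null_resize Base_1 64                       # grow one leg to 64 MiB
    $rpc bdev_get_bdevs -b Raid | jq '.[].num_blocks'     # still 131072: capped by Base_2

    $rpc bdev_null_resize Base_2 64                       # grow the other leg
    $rpc bdev_get_bdevs -b Raid | jq '.[].num_blocks'     # 262144 = 128 MiB of 512-byte blocks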
10:28:47 -- bdev/bdev_raid.sh@322 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:14:54.460 [2024-07-12 10:28:48.141820] bdev_raid.c:2069:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:14:54.460 [2024-07-12 10:28:48.141843] bdev_raid.c:2082:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:14:54.461 [2024-07-12 10:28:48.141878] raid0.c: 402:raid0_resize: *NOTICE*: raid0 'Raid': min blockcount was changed from 262144 to 262144 00:14:54.461 [2024-07-12 10:28:48.141933] bdev_raid.c:1572:raid_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:54.461 true 00:14:54.461 10:28:48 -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:14:54.461 10:28:48 -- bdev/bdev_raid.sh@325 -- # jq '.[].num_blocks' 00:14:54.719 [2024-07-12 10:28:48.389968] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:54.719 10:28:48 -- bdev/bdev_raid.sh@325 -- # blkcnt=262144 00:14:54.719 10:28:48 -- bdev/bdev_raid.sh@326 -- # raid_size_mb=128 00:14:54.719 10:28:48 -- bdev/bdev_raid.sh@327 -- # '[' 128 '!=' 128 ']' 00:14:54.719 10:28:48 -- bdev/bdev_raid.sh@332 -- # killprocess 114865 00:14:54.719 10:28:48 -- common/autotest_common.sh@926 -- # '[' -z 114865 ']' 00:14:54.719 10:28:48 -- common/autotest_common.sh@930 -- # kill -0 114865 00:14:54.719 10:28:48 -- common/autotest_common.sh@931 -- # uname 00:14:54.719 10:28:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:54.719 10:28:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 114865 00:14:54.719 10:28:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:54.719 killing process with pid 114865 00:14:54.719 10:28:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:54.719 10:28:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 114865' 00:14:54.719 10:28:48 -- common/autotest_common.sh@945 -- # kill 114865 00:14:54.719 10:28:48 -- common/autotest_common.sh@950 -- # wait 114865 00:14:54.719 [2024-07-12 10:28:48.413940] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:54.719 [2024-07-12 10:28:48.414004] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:54.719 [2024-07-12 10:28:48.414040] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:54.719 [2024-07-12 10:28:48.414049] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Raid, state offline 00:14:54.719 [2024-07-12 10:28:48.414498] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:55.665 10:28:49 -- bdev/bdev_raid.sh@334 -- # return 0 00:14:55.665 00:14:55.665 real 0m3.435s 00:14:55.665 user 0m4.782s 00:14:55.665 sys 0m0.515s 00:14:55.665 10:28:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:55.665 10:28:49 -- common/autotest_common.sh@10 -- # set +x 00:14:55.665 ************************************ 00:14:55.665 END TEST raid0_resize_test 00:14:55.665 ************************************ 00:14:55.665 10:28:49 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:14:55.665 10:28:49 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:14:55.665 10:28:49 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:14:55.665 10:28:49 -- 
common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:14:55.665 10:28:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:55.665 10:28:49 -- common/autotest_common.sh@10 -- # set +x 00:14:55.665 ************************************ 00:14:55.665 START TEST raid_state_function_test 00:14:55.665 ************************************ 00:14:55.665 10:28:49 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 2 false 00:14:55.665 10:28:49 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:14:55.665 10:28:49 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:14:55.665 10:28:49 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:14:55.665 10:28:49 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:14:55.665 10:28:49 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:14:55.665 10:28:49 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:14:55.665 10:28:49 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:55.665 10:28:49 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:14:55.665 10:28:49 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:55.665 10:28:49 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:55.665 10:28:49 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:14:55.665 10:28:49 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:55.665 10:28:49 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:55.665 10:28:49 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:14:55.665 10:28:49 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:14:55.665 10:28:49 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:14:55.665 10:28:49 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:14:55.665 10:28:49 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:14:55.665 10:28:49 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:14:55.665 10:28:49 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:14:55.665 10:28:49 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:14:55.665 10:28:49 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:14:55.665 10:28:49 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:14:55.665 10:28:49 -- bdev/bdev_raid.sh@226 -- # raid_pid=114954 00:14:55.665 10:28:49 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 114954' 00:14:55.665 Process raid pid: 114954 00:14:55.665 10:28:49 -- bdev/bdev_raid.sh@228 -- # waitforlisten 114954 /var/tmp/spdk-raid.sock 00:14:55.665 10:28:49 -- common/autotest_common.sh@819 -- # '[' -z 114954 ']' 00:14:55.665 10:28:49 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:55.665 10:28:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:55.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:55.665 10:28:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:55.665 10:28:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:55.665 10:28:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:55.665 10:28:49 -- common/autotest_common.sh@10 -- # set +x 00:14:55.665 [2024-07-12 10:28:49.540704] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
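(A note on the scaffolding traced above: each state-function test boots its own bdev_svc app with RAID debug logging and waits for its RPC socket before issuing commands. A sketch of that launch follows; the binary, flags, and waitforlisten call are verbatim from this trace, while the backgrounding and $! capture are assumptions about the helper's internals.)

  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
  raid_pid=$!                                       # 114954 in this run
  waitforlisten $raid_pid /var/tmp/spdk-raid.sock   # poll until the app answers RPCs on the socket

(The 'Starting SPDK ... initialization' banner above and the DPDK EAL parameter line below are that app coming up.)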
00:14:55.665 [2024-07-12 10:28:49.540904] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:55.923 [2024-07-12 10:28:49.705372] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:56.182 [2024-07-12 10:28:49.882365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:56.182 [2024-07-12 10:28:50.069671] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:56.748 10:28:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:56.748 10:28:50 -- common/autotest_common.sh@852 -- # return 0 00:14:56.748 10:28:50 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:57.006 [2024-07-12 10:28:50.693946] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:57.006 [2024-07-12 10:28:50.694036] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:57.006 [2024-07-12 10:28:50.694050] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:57.006 [2024-07-12 10:28:50.694071] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:57.006 10:28:50 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:57.006 10:28:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:57.006 10:28:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:57.006 10:28:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:57.006 10:28:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:57.006 10:28:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:57.006 10:28:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:57.006 10:28:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:57.006 10:28:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:57.006 10:28:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:57.006 10:28:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:57.006 10:28:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:57.263 10:28:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:57.263 "name": "Existed_Raid", 00:14:57.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.263 "strip_size_kb": 64, 00:14:57.263 "state": "configuring", 00:14:57.263 "raid_level": "raid0", 00:14:57.263 "superblock": false, 00:14:57.263 "num_base_bdevs": 2, 00:14:57.263 "num_base_bdevs_discovered": 0, 00:14:57.263 "num_base_bdevs_operational": 2, 00:14:57.263 "base_bdevs_list": [ 00:14:57.263 { 00:14:57.263 "name": "BaseBdev1", 00:14:57.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.263 "is_configured": false, 00:14:57.264 "data_offset": 0, 00:14:57.264 "data_size": 0 00:14:57.264 }, 00:14:57.264 { 00:14:57.264 "name": "BaseBdev2", 00:14:57.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.264 "is_configured": false, 00:14:57.264 "data_offset": 0, 00:14:57.264 "data_size": 0 00:14:57.264 } 00:14:57.264 ] 00:14:57.264 }' 00:14:57.264 10:28:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:57.264 10:28:50 -- 
common/autotest_common.sh@10 -- # set +x 00:14:57.831 10:28:51 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:58.089 [2024-07-12 10:28:51.798003] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:58.089 [2024-07-12 10:28:51.798039] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:14:58.089 10:28:51 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:58.347 [2024-07-12 10:28:52.042034] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:58.347 [2024-07-12 10:28:52.042102] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:58.347 [2024-07-12 10:28:52.042114] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:58.347 [2024-07-12 10:28:52.042138] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:58.347 10:28:52 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:58.347 [2024-07-12 10:28:52.259502] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:58.347 BaseBdev1 00:14:58.605 10:28:52 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:14:58.605 10:28:52 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:14:58.605 10:28:52 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:58.605 10:28:52 -- common/autotest_common.sh@889 -- # local i 00:14:58.605 10:28:52 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:58.605 10:28:52 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:58.605 10:28:52 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:58.605 10:28:52 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:58.862 [ 00:14:58.862 { 00:14:58.862 "name": "BaseBdev1", 00:14:58.862 "aliases": [ 00:14:58.862 "7e283687-5f66-4a57-9f0a-aac570a99c0a" 00:14:58.862 ], 00:14:58.862 "product_name": "Malloc disk", 00:14:58.862 "block_size": 512, 00:14:58.862 "num_blocks": 65536, 00:14:58.862 "uuid": "7e283687-5f66-4a57-9f0a-aac570a99c0a", 00:14:58.862 "assigned_rate_limits": { 00:14:58.862 "rw_ios_per_sec": 0, 00:14:58.862 "rw_mbytes_per_sec": 0, 00:14:58.862 "r_mbytes_per_sec": 0, 00:14:58.862 "w_mbytes_per_sec": 0 00:14:58.862 }, 00:14:58.862 "claimed": true, 00:14:58.862 "claim_type": "exclusive_write", 00:14:58.862 "zoned": false, 00:14:58.862 "supported_io_types": { 00:14:58.862 "read": true, 00:14:58.862 "write": true, 00:14:58.862 "unmap": true, 00:14:58.862 "write_zeroes": true, 00:14:58.862 "flush": true, 00:14:58.862 "reset": true, 00:14:58.862 "compare": false, 00:14:58.862 "compare_and_write": false, 00:14:58.862 "abort": true, 00:14:58.862 "nvme_admin": false, 00:14:58.862 "nvme_io": false 00:14:58.862 }, 00:14:58.862 "memory_domains": [ 00:14:58.862 { 00:14:58.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:58.862 "dma_device_type": 2 00:14:58.862 } 00:14:58.862 ], 00:14:58.862 "driver_specific": {} 00:14:58.862 } 00:14:58.862 ] 00:14:58.862 10:28:52 
-- common/autotest_common.sh@895 -- # return 0 00:14:58.862 10:28:52 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:58.862 10:28:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:58.862 10:28:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:58.862 10:28:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:58.862 10:28:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:58.862 10:28:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:58.862 10:28:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:58.862 10:28:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:58.862 10:28:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:58.862 10:28:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:58.862 10:28:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:58.862 10:28:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:59.120 10:28:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:59.120 "name": "Existed_Raid", 00:14:59.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.120 "strip_size_kb": 64, 00:14:59.120 "state": "configuring", 00:14:59.120 "raid_level": "raid0", 00:14:59.120 "superblock": false, 00:14:59.120 "num_base_bdevs": 2, 00:14:59.120 "num_base_bdevs_discovered": 1, 00:14:59.120 "num_base_bdevs_operational": 2, 00:14:59.120 "base_bdevs_list": [ 00:14:59.120 { 00:14:59.120 "name": "BaseBdev1", 00:14:59.120 "uuid": "7e283687-5f66-4a57-9f0a-aac570a99c0a", 00:14:59.120 "is_configured": true, 00:14:59.120 "data_offset": 0, 00:14:59.120 "data_size": 65536 00:14:59.120 }, 00:14:59.120 { 00:14:59.120 "name": "BaseBdev2", 00:14:59.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.120 "is_configured": false, 00:14:59.120 "data_offset": 0, 00:14:59.120 "data_size": 0 00:14:59.120 } 00:14:59.120 ] 00:14:59.120 }' 00:14:59.120 10:28:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:59.120 10:28:52 -- common/autotest_common.sh@10 -- # set +x 00:14:59.684 10:28:53 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:59.942 [2024-07-12 10:28:53.647717] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:59.942 [2024-07-12 10:28:53.647761] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:14:59.942 10:28:53 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:14:59.942 10:28:53 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:59.942 [2024-07-12 10:28:53.827780] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:59.942 [2024-07-12 10:28:53.829635] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:59.942 [2024-07-12 10:28:53.829689] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:59.942 10:28:53 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:14:59.942 10:28:53 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:59.942 10:28:53 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:59.942 10:28:53 -- 
bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:59.942 10:28:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:59.942 10:28:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:59.942 10:28:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:59.942 10:28:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:59.942 10:28:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:59.942 10:28:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:59.942 10:28:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:59.942 10:28:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:59.942 10:28:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:59.942 10:28:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:00.200 10:28:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:00.200 "name": "Existed_Raid", 00:15:00.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.200 "strip_size_kb": 64, 00:15:00.200 "state": "configuring", 00:15:00.200 "raid_level": "raid0", 00:15:00.200 "superblock": false, 00:15:00.200 "num_base_bdevs": 2, 00:15:00.200 "num_base_bdevs_discovered": 1, 00:15:00.200 "num_base_bdevs_operational": 2, 00:15:00.200 "base_bdevs_list": [ 00:15:00.200 { 00:15:00.200 "name": "BaseBdev1", 00:15:00.200 "uuid": "7e283687-5f66-4a57-9f0a-aac570a99c0a", 00:15:00.200 "is_configured": true, 00:15:00.200 "data_offset": 0, 00:15:00.200 "data_size": 65536 00:15:00.200 }, 00:15:00.200 { 00:15:00.200 "name": "BaseBdev2", 00:15:00.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.200 "is_configured": false, 00:15:00.200 "data_offset": 0, 00:15:00.200 "data_size": 0 00:15:00.200 } 00:15:00.200 ] 00:15:00.200 }' 00:15:00.200 10:28:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:00.200 10:28:54 -- common/autotest_common.sh@10 -- # set +x 00:15:00.766 10:28:54 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:01.024 [2024-07-12 10:28:54.935660] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:01.024 [2024-07-12 10:28:54.935697] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:15:01.024 [2024-07-12 10:28:54.935718] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:01.024 [2024-07-12 10:28:54.935841] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005520 00:15:01.024 [2024-07-12 10:28:54.936183] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:15:01.024 [2024-07-12 10:28:54.936206] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:15:01.024 BaseBdev2 00:15:01.024 [2024-07-12 10:28:54.936456] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:01.283 10:28:54 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:01.283 10:28:54 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:15:01.283 10:28:54 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:01.283 10:28:54 -- common/autotest_common.sh@889 -- # local i 00:15:01.283 10:28:54 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:01.283 10:28:54 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:01.283 
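(The waitforbdev helper being traced here amounts to two RPCs, both of which appear verbatim on the surrounding lines; paths shortened as before.)

  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine                  # flush any pending examine callbacks
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000    # wait up to 2000 ms for the bdev to exist

(The JSON document that follows is the successful bdev_get_bdevs reply for the new malloc bdev; its "claimed": true / "claim_type": "exclusive_write" fields show the raid module has already taken ownership of it.)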
10:28:54 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:01.283 10:28:55 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:01.541 [ 00:15:01.541 { 00:15:01.541 "name": "BaseBdev2", 00:15:01.541 "aliases": [ 00:15:01.541 "4b8eccad-7ce3-4922-8fb3-269d2e2aa8d2" 00:15:01.541 ], 00:15:01.541 "product_name": "Malloc disk", 00:15:01.541 "block_size": 512, 00:15:01.541 "num_blocks": 65536, 00:15:01.541 "uuid": "4b8eccad-7ce3-4922-8fb3-269d2e2aa8d2", 00:15:01.541 "assigned_rate_limits": { 00:15:01.541 "rw_ios_per_sec": 0, 00:15:01.541 "rw_mbytes_per_sec": 0, 00:15:01.541 "r_mbytes_per_sec": 0, 00:15:01.541 "w_mbytes_per_sec": 0 00:15:01.541 }, 00:15:01.541 "claimed": true, 00:15:01.541 "claim_type": "exclusive_write", 00:15:01.541 "zoned": false, 00:15:01.541 "supported_io_types": { 00:15:01.541 "read": true, 00:15:01.541 "write": true, 00:15:01.541 "unmap": true, 00:15:01.541 "write_zeroes": true, 00:15:01.541 "flush": true, 00:15:01.541 "reset": true, 00:15:01.541 "compare": false, 00:15:01.541 "compare_and_write": false, 00:15:01.541 "abort": true, 00:15:01.541 "nvme_admin": false, 00:15:01.541 "nvme_io": false 00:15:01.541 }, 00:15:01.541 "memory_domains": [ 00:15:01.541 { 00:15:01.541 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.541 "dma_device_type": 2 00:15:01.541 } 00:15:01.541 ], 00:15:01.541 "driver_specific": {} 00:15:01.541 } 00:15:01.541 ] 00:15:01.541 10:28:55 -- common/autotest_common.sh@895 -- # return 0 00:15:01.541 10:28:55 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:01.541 10:28:55 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:01.541 10:28:55 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:15:01.541 10:28:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:01.541 10:28:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:01.541 10:28:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:01.541 10:28:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:01.541 10:28:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:01.541 10:28:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:01.541 10:28:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:01.541 10:28:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:01.541 10:28:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:01.541 10:28:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:01.541 10:28:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:01.800 10:28:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:01.800 "name": "Existed_Raid", 00:15:01.800 "uuid": "58d2680a-99f1-40a7-bd0f-60fb03b67454", 00:15:01.800 "strip_size_kb": 64, 00:15:01.800 "state": "online", 00:15:01.800 "raid_level": "raid0", 00:15:01.800 "superblock": false, 00:15:01.800 "num_base_bdevs": 2, 00:15:01.800 "num_base_bdevs_discovered": 2, 00:15:01.800 "num_base_bdevs_operational": 2, 00:15:01.800 "base_bdevs_list": [ 00:15:01.800 { 00:15:01.800 "name": "BaseBdev1", 00:15:01.800 "uuid": "7e283687-5f66-4a57-9f0a-aac570a99c0a", 00:15:01.800 "is_configured": true, 00:15:01.800 "data_offset": 0, 00:15:01.800 "data_size": 65536 00:15:01.800 }, 00:15:01.800 { 00:15:01.800 "name": "BaseBdev2", 
00:15:01.800 "uuid": "4b8eccad-7ce3-4922-8fb3-269d2e2aa8d2", 00:15:01.800 "is_configured": true, 00:15:01.800 "data_offset": 0, 00:15:01.800 "data_size": 65536 00:15:01.800 } 00:15:01.800 ] 00:15:01.800 }' 00:15:01.800 10:28:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:01.800 10:28:55 -- common/autotest_common.sh@10 -- # set +x 00:15:02.366 10:28:56 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:02.624 [2024-07-12 10:28:56.443956] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:02.624 [2024-07-12 10:28:56.443981] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:02.625 [2024-07-12 10:28:56.444046] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:02.625 10:28:56 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:02.625 10:28:56 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:15:02.625 10:28:56 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:02.625 10:28:56 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:02.625 10:28:56 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:15:02.625 10:28:56 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:15:02.625 10:28:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:02.625 10:28:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:15:02.625 10:28:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:02.625 10:28:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:02.625 10:28:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:15:02.625 10:28:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:02.625 10:28:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:02.625 10:28:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:02.625 10:28:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:02.625 10:28:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:02.625 10:28:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:02.882 10:28:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:02.882 "name": "Existed_Raid", 00:15:02.882 "uuid": "58d2680a-99f1-40a7-bd0f-60fb03b67454", 00:15:02.882 "strip_size_kb": 64, 00:15:02.882 "state": "offline", 00:15:02.882 "raid_level": "raid0", 00:15:02.882 "superblock": false, 00:15:02.882 "num_base_bdevs": 2, 00:15:02.882 "num_base_bdevs_discovered": 1, 00:15:02.882 "num_base_bdevs_operational": 1, 00:15:02.882 "base_bdevs_list": [ 00:15:02.882 { 00:15:02.882 "name": null, 00:15:02.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.882 "is_configured": false, 00:15:02.882 "data_offset": 0, 00:15:02.882 "data_size": 65536 00:15:02.882 }, 00:15:02.882 { 00:15:02.882 "name": "BaseBdev2", 00:15:02.882 "uuid": "4b8eccad-7ce3-4922-8fb3-269d2e2aa8d2", 00:15:02.882 "is_configured": true, 00:15:02.882 "data_offset": 0, 00:15:02.882 "data_size": 65536 00:15:02.882 } 00:15:02.882 ] 00:15:02.882 }' 00:15:02.882 10:28:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:02.882 10:28:56 -- common/autotest_common.sh@10 -- # set +x 00:15:03.445 10:28:57 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:03.445 10:28:57 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:03.445 10:28:57 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:03.445 10:28:57 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:03.702 10:28:57 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:03.702 10:28:57 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:03.702 10:28:57 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:03.960 [2024-07-12 10:28:57.730461] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:03.960 [2024-07-12 10:28:57.730528] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:15:03.960 10:28:57 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:03.960 10:28:57 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:03.960 10:28:57 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:03.960 10:28:57 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:04.218 10:28:57 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:15:04.218 10:28:57 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:04.218 10:28:57 -- bdev/bdev_raid.sh@287 -- # killprocess 114954 00:15:04.218 10:28:57 -- common/autotest_common.sh@926 -- # '[' -z 114954 ']' 00:15:04.218 10:28:57 -- common/autotest_common.sh@930 -- # kill -0 114954 00:15:04.218 10:28:57 -- common/autotest_common.sh@931 -- # uname 00:15:04.218 10:28:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:04.218 10:28:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 114954 00:15:04.218 killing process with pid 114954 00:15:04.218 10:28:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:04.218 10:28:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:04.218 10:28:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 114954' 00:15:04.218 10:28:58 -- common/autotest_common.sh@945 -- # kill 114954 00:15:04.218 10:28:58 -- common/autotest_common.sh@950 -- # wait 114954 00:15:04.218 [2024-07-12 10:28:58.017731] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:04.218 [2024-07-12 10:28:58.017841] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:05.151 ************************************ 00:15:05.151 END TEST raid_state_function_test 00:15:05.151 ************************************ 00:15:05.151 10:28:59 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:05.151 00:15:05.151 real 0m9.555s 00:15:05.151 user 0m16.815s 00:15:05.151 sys 0m1.005s 00:15:05.151 10:28:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:05.151 10:28:59 -- common/autotest_common.sh@10 -- # set +x 00:15:05.408 10:28:59 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:15:05.408 10:28:59 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:15:05.408 10:28:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:05.408 10:28:59 -- common/autotest_common.sh@10 -- # set +x 00:15:05.408 ************************************ 00:15:05.408 START TEST raid_state_function_test_sb 00:15:05.408 ************************************ 00:15:05.408 10:28:59 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 2 true 00:15:05.408 10:28:59 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:15:05.408 10:28:59 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:15:05.408 10:28:59 -- 
bdev/bdev_raid.sh@204 -- # local superblock=true 00:15:05.408 10:28:59 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:05.408 10:28:59 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:15:05.408 10:28:59 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:05.408 10:28:59 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:05.408 10:28:59 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:15:05.408 10:28:59 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:05.408 10:28:59 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:05.408 10:28:59 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:15:05.408 10:28:59 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:05.408 10:28:59 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:05.408 10:28:59 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:05.408 10:28:59 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:05.408 10:28:59 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:05.408 10:28:59 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:05.408 10:28:59 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:05.408 10:28:59 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:15:05.408 10:28:59 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:15:05.408 10:28:59 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:15:05.408 10:28:59 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:15:05.408 10:28:59 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:15:05.408 10:28:59 -- bdev/bdev_raid.sh@226 -- # raid_pid=115282 00:15:05.408 Process raid pid: 115282 00:15:05.408 10:28:59 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 115282' 00:15:05.408 10:28:59 -- bdev/bdev_raid.sh@228 -- # waitforlisten 115282 /var/tmp/spdk-raid.sock 00:15:05.408 10:28:59 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:05.408 10:28:59 -- common/autotest_common.sh@819 -- # '[' -z 115282 ']' 00:15:05.408 10:28:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:05.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:05.408 10:28:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:05.408 10:28:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:05.409 10:28:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:05.409 10:28:59 -- common/autotest_common.sh@10 -- # set +x 00:15:05.409 [2024-07-12 10:28:59.157380] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
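(The _sb variant differs from the plain raid_state_function_test above only in passing -s, i.e. superblock, to bdev_raid_create. Sketch, with flags as traced later in this test:)

  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid

(With a superblock, each 65536-block base bdev reserves its first 2048 blocks for metadata and contributes 63488 data blocks, so the assembled array reports blockcnt 126976 rather than 131072; the "data_offset": 2048 / "data_size": 63488 fields in the JSON dumps further down confirm this.)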
00:15:05.409 [2024-07-12 10:28:59.158220] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:05.409 [2024-07-12 10:28:59.324844] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:05.667 [2024-07-12 10:28:59.499764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:05.925 [2024-07-12 10:28:59.687607] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:06.183 10:29:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:06.183 10:29:00 -- common/autotest_common.sh@852 -- # return 0 00:15:06.183 10:29:00 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:06.441 [2024-07-12 10:29:00.250721] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:06.441 [2024-07-12 10:29:00.250820] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:06.441 [2024-07-12 10:29:00.250834] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:06.442 [2024-07-12 10:29:00.250855] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:06.442 10:29:00 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:15:06.442 10:29:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:06.442 10:29:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:06.442 10:29:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:06.442 10:29:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:06.442 10:29:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:06.442 10:29:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:06.442 10:29:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:06.442 10:29:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:06.442 10:29:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:06.442 10:29:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:06.442 10:29:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:06.700 10:29:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:06.700 "name": "Existed_Raid", 00:15:06.700 "uuid": "e020f7b1-54a3-43fa-9ef3-40b031d62019", 00:15:06.700 "strip_size_kb": 64, 00:15:06.700 "state": "configuring", 00:15:06.700 "raid_level": "raid0", 00:15:06.700 "superblock": true, 00:15:06.700 "num_base_bdevs": 2, 00:15:06.700 "num_base_bdevs_discovered": 0, 00:15:06.700 "num_base_bdevs_operational": 2, 00:15:06.700 "base_bdevs_list": [ 00:15:06.700 { 00:15:06.700 "name": "BaseBdev1", 00:15:06.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.700 "is_configured": false, 00:15:06.700 "data_offset": 0, 00:15:06.700 "data_size": 0 00:15:06.700 }, 00:15:06.700 { 00:15:06.700 "name": "BaseBdev2", 00:15:06.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.700 "is_configured": false, 00:15:06.700 "data_offset": 0, 00:15:06.700 "data_size": 0 00:15:06.700 } 00:15:06.700 ] 00:15:06.700 }' 00:15:06.700 10:29:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:06.700 10:29:00 -- 
common/autotest_common.sh@10 -- # set +x 00:15:07.266 10:29:01 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:07.524 [2024-07-12 10:29:01.370749] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:07.524 [2024-07-12 10:29:01.370779] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:15:07.524 10:29:01 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:07.782 [2024-07-12 10:29:01.554826] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:07.782 [2024-07-12 10:29:01.554893] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:07.783 [2024-07-12 10:29:01.554905] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:07.783 [2024-07-12 10:29:01.554929] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:07.783 10:29:01 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:08.041 [2024-07-12 10:29:01.760321] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:08.041 BaseBdev1 00:15:08.041 10:29:01 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:08.041 10:29:01 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:08.041 10:29:01 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:08.041 10:29:01 -- common/autotest_common.sh@889 -- # local i 00:15:08.041 10:29:01 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:08.041 10:29:01 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:08.041 10:29:01 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:08.041 10:29:01 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:08.300 [ 00:15:08.300 { 00:15:08.300 "name": "BaseBdev1", 00:15:08.300 "aliases": [ 00:15:08.300 "09732f5d-0590-4b1d-a7cb-da22e6ecb56f" 00:15:08.300 ], 00:15:08.300 "product_name": "Malloc disk", 00:15:08.300 "block_size": 512, 00:15:08.300 "num_blocks": 65536, 00:15:08.300 "uuid": "09732f5d-0590-4b1d-a7cb-da22e6ecb56f", 00:15:08.300 "assigned_rate_limits": { 00:15:08.300 "rw_ios_per_sec": 0, 00:15:08.300 "rw_mbytes_per_sec": 0, 00:15:08.300 "r_mbytes_per_sec": 0, 00:15:08.300 "w_mbytes_per_sec": 0 00:15:08.300 }, 00:15:08.300 "claimed": true, 00:15:08.300 "claim_type": "exclusive_write", 00:15:08.300 "zoned": false, 00:15:08.300 "supported_io_types": { 00:15:08.300 "read": true, 00:15:08.300 "write": true, 00:15:08.300 "unmap": true, 00:15:08.300 "write_zeroes": true, 00:15:08.300 "flush": true, 00:15:08.300 "reset": true, 00:15:08.300 "compare": false, 00:15:08.300 "compare_and_write": false, 00:15:08.300 "abort": true, 00:15:08.300 "nvme_admin": false, 00:15:08.300 "nvme_io": false 00:15:08.300 }, 00:15:08.300 "memory_domains": [ 00:15:08.300 { 00:15:08.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:08.300 "dma_device_type": 2 00:15:08.300 } 00:15:08.300 ], 00:15:08.300 "driver_specific": {} 00:15:08.300 } 00:15:08.300 ] 00:15:08.300 
10:29:02 -- common/autotest_common.sh@895 -- # return 0 00:15:08.300 10:29:02 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:15:08.300 10:29:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:08.300 10:29:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:08.300 10:29:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:08.300 10:29:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:08.300 10:29:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:08.300 10:29:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:08.300 10:29:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:08.300 10:29:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:08.300 10:29:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:08.300 10:29:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:08.300 10:29:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:08.559 10:29:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:08.559 "name": "Existed_Raid", 00:15:08.559 "uuid": "8bd23ca4-c93f-44a4-a7ef-18690d4d9d8c", 00:15:08.559 "strip_size_kb": 64, 00:15:08.559 "state": "configuring", 00:15:08.559 "raid_level": "raid0", 00:15:08.559 "superblock": true, 00:15:08.559 "num_base_bdevs": 2, 00:15:08.559 "num_base_bdevs_discovered": 1, 00:15:08.559 "num_base_bdevs_operational": 2, 00:15:08.559 "base_bdevs_list": [ 00:15:08.559 { 00:15:08.559 "name": "BaseBdev1", 00:15:08.559 "uuid": "09732f5d-0590-4b1d-a7cb-da22e6ecb56f", 00:15:08.559 "is_configured": true, 00:15:08.559 "data_offset": 2048, 00:15:08.559 "data_size": 63488 00:15:08.559 }, 00:15:08.559 { 00:15:08.559 "name": "BaseBdev2", 00:15:08.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.559 "is_configured": false, 00:15:08.559 "data_offset": 0, 00:15:08.559 "data_size": 0 00:15:08.559 } 00:15:08.559 ] 00:15:08.559 }' 00:15:08.559 10:29:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:08.559 10:29:02 -- common/autotest_common.sh@10 -- # set +x 00:15:09.124 10:29:02 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:09.381 [2024-07-12 10:29:03.100535] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:09.381 [2024-07-12 10:29:03.100570] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:15:09.382 10:29:03 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:15:09.382 10:29:03 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:09.640 10:29:03 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:09.898 BaseBdev1 00:15:09.898 10:29:03 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:15:09.898 10:29:03 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:09.898 10:29:03 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:09.898 10:29:03 -- common/autotest_common.sh@889 -- # local i 00:15:09.898 10:29:03 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:09.898 10:29:03 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:09.898 10:29:03 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:09.898 10:29:03 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:10.162 [ 00:15:10.162 { 00:15:10.162 "name": "BaseBdev1", 00:15:10.162 "aliases": [ 00:15:10.162 "60fc7014-746e-463f-9159-e9f20100836f" 00:15:10.162 ], 00:15:10.162 "product_name": "Malloc disk", 00:15:10.162 "block_size": 512, 00:15:10.162 "num_blocks": 65536, 00:15:10.162 "uuid": "60fc7014-746e-463f-9159-e9f20100836f", 00:15:10.162 "assigned_rate_limits": { 00:15:10.162 "rw_ios_per_sec": 0, 00:15:10.162 "rw_mbytes_per_sec": 0, 00:15:10.162 "r_mbytes_per_sec": 0, 00:15:10.162 "w_mbytes_per_sec": 0 00:15:10.162 }, 00:15:10.162 "claimed": false, 00:15:10.162 "zoned": false, 00:15:10.162 "supported_io_types": { 00:15:10.162 "read": true, 00:15:10.162 "write": true, 00:15:10.162 "unmap": true, 00:15:10.162 "write_zeroes": true, 00:15:10.162 "flush": true, 00:15:10.162 "reset": true, 00:15:10.162 "compare": false, 00:15:10.162 "compare_and_write": false, 00:15:10.162 "abort": true, 00:15:10.162 "nvme_admin": false, 00:15:10.162 "nvme_io": false 00:15:10.162 }, 00:15:10.162 "memory_domains": [ 00:15:10.162 { 00:15:10.162 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:10.162 "dma_device_type": 2 00:15:10.162 } 00:15:10.162 ], 00:15:10.162 "driver_specific": {} 00:15:10.162 } 00:15:10.162 ] 00:15:10.162 10:29:03 -- common/autotest_common.sh@895 -- # return 0 00:15:10.162 10:29:03 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:10.420 [2024-07-12 10:29:04.134407] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:10.420 [2024-07-12 10:29:04.136266] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:10.420 [2024-07-12 10:29:04.136333] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:10.420 10:29:04 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:10.420 10:29:04 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:10.420 10:29:04 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:15:10.420 10:29:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:10.420 10:29:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:10.420 10:29:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:10.420 10:29:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:10.420 10:29:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:10.420 10:29:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:10.420 10:29:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:10.420 10:29:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:10.420 10:29:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:10.420 10:29:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:10.420 10:29:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:10.678 10:29:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:10.678 "name": "Existed_Raid", 00:15:10.678 "uuid": "3738ed2e-8a03-40f6-9466-17b718184d9e", 00:15:10.678 "strip_size_kb": 64, 00:15:10.678 "state": 
"configuring", 00:15:10.678 "raid_level": "raid0", 00:15:10.678 "superblock": true, 00:15:10.678 "num_base_bdevs": 2, 00:15:10.678 "num_base_bdevs_discovered": 1, 00:15:10.678 "num_base_bdevs_operational": 2, 00:15:10.678 "base_bdevs_list": [ 00:15:10.678 { 00:15:10.678 "name": "BaseBdev1", 00:15:10.678 "uuid": "60fc7014-746e-463f-9159-e9f20100836f", 00:15:10.678 "is_configured": true, 00:15:10.678 "data_offset": 2048, 00:15:10.678 "data_size": 63488 00:15:10.678 }, 00:15:10.678 { 00:15:10.678 "name": "BaseBdev2", 00:15:10.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.678 "is_configured": false, 00:15:10.678 "data_offset": 0, 00:15:10.678 "data_size": 0 00:15:10.678 } 00:15:10.678 ] 00:15:10.678 }' 00:15:10.678 10:29:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:10.678 10:29:04 -- common/autotest_common.sh@10 -- # set +x 00:15:11.243 10:29:04 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:11.501 [2024-07-12 10:29:05.189648] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:11.501 [2024-07-12 10:29:05.189860] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:15:11.501 [2024-07-12 10:29:05.189875] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:11.501 [2024-07-12 10:29:05.189989] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:15:11.501 BaseBdev2 00:15:11.501 [2024-07-12 10:29:05.190369] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:15:11.501 [2024-07-12 10:29:05.190390] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:15:11.501 [2024-07-12 10:29:05.190526] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:11.501 10:29:05 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:11.501 10:29:05 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:15:11.501 10:29:05 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:11.501 10:29:05 -- common/autotest_common.sh@889 -- # local i 00:15:11.501 10:29:05 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:11.501 10:29:05 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:11.501 10:29:05 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:11.501 10:29:05 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:11.759 [ 00:15:11.759 { 00:15:11.759 "name": "BaseBdev2", 00:15:11.759 "aliases": [ 00:15:11.759 "f8770664-8d7c-43b9-a601-f880c33d1f5e" 00:15:11.759 ], 00:15:11.759 "product_name": "Malloc disk", 00:15:11.759 "block_size": 512, 00:15:11.759 "num_blocks": 65536, 00:15:11.759 "uuid": "f8770664-8d7c-43b9-a601-f880c33d1f5e", 00:15:11.759 "assigned_rate_limits": { 00:15:11.759 "rw_ios_per_sec": 0, 00:15:11.759 "rw_mbytes_per_sec": 0, 00:15:11.759 "r_mbytes_per_sec": 0, 00:15:11.759 "w_mbytes_per_sec": 0 00:15:11.759 }, 00:15:11.759 "claimed": true, 00:15:11.759 "claim_type": "exclusive_write", 00:15:11.759 "zoned": false, 00:15:11.759 "supported_io_types": { 00:15:11.759 "read": true, 00:15:11.759 "write": true, 00:15:11.759 "unmap": true, 00:15:11.759 "write_zeroes": true, 00:15:11.759 "flush": true, 00:15:11.759 
"reset": true, 00:15:11.759 "compare": false, 00:15:11.759 "compare_and_write": false, 00:15:11.759 "abort": true, 00:15:11.759 "nvme_admin": false, 00:15:11.759 "nvme_io": false 00:15:11.759 }, 00:15:11.759 "memory_domains": [ 00:15:11.759 { 00:15:11.759 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:11.759 "dma_device_type": 2 00:15:11.759 } 00:15:11.759 ], 00:15:11.759 "driver_specific": {} 00:15:11.759 } 00:15:11.759 ] 00:15:11.759 10:29:05 -- common/autotest_common.sh@895 -- # return 0 00:15:11.759 10:29:05 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:11.759 10:29:05 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:11.759 10:29:05 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:15:11.759 10:29:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:11.759 10:29:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:11.759 10:29:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:11.759 10:29:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:11.759 10:29:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:11.759 10:29:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:11.759 10:29:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:11.759 10:29:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:11.759 10:29:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:11.759 10:29:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:11.759 10:29:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:12.017 10:29:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:12.017 "name": "Existed_Raid", 00:15:12.017 "uuid": "3738ed2e-8a03-40f6-9466-17b718184d9e", 00:15:12.017 "strip_size_kb": 64, 00:15:12.017 "state": "online", 00:15:12.017 "raid_level": "raid0", 00:15:12.017 "superblock": true, 00:15:12.017 "num_base_bdevs": 2, 00:15:12.017 "num_base_bdevs_discovered": 2, 00:15:12.017 "num_base_bdevs_operational": 2, 00:15:12.017 "base_bdevs_list": [ 00:15:12.017 { 00:15:12.017 "name": "BaseBdev1", 00:15:12.017 "uuid": "60fc7014-746e-463f-9159-e9f20100836f", 00:15:12.017 "is_configured": true, 00:15:12.017 "data_offset": 2048, 00:15:12.017 "data_size": 63488 00:15:12.017 }, 00:15:12.017 { 00:15:12.017 "name": "BaseBdev2", 00:15:12.017 "uuid": "f8770664-8d7c-43b9-a601-f880c33d1f5e", 00:15:12.017 "is_configured": true, 00:15:12.017 "data_offset": 2048, 00:15:12.017 "data_size": 63488 00:15:12.017 } 00:15:12.017 ] 00:15:12.017 }' 00:15:12.017 10:29:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:12.017 10:29:05 -- common/autotest_common.sh@10 -- # set +x 00:15:12.583 10:29:06 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:12.841 [2024-07-12 10:29:06.529922] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:12.841 [2024-07-12 10:29:06.529947] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:12.841 [2024-07-12 10:29:06.530015] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:12.841 10:29:06 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:12.841 10:29:06 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:15:12.841 10:29:06 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:12.841 10:29:06 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:12.841 
10:29:06 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:15:12.841 10:29:06 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:15:12.841 10:29:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:12.841 10:29:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:15:12.841 10:29:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:12.841 10:29:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:12.841 10:29:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:15:12.841 10:29:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:12.841 10:29:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:12.841 10:29:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:12.841 10:29:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:12.841 10:29:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:12.841 10:29:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:13.098 10:29:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:13.098 "name": "Existed_Raid", 00:15:13.098 "uuid": "3738ed2e-8a03-40f6-9466-17b718184d9e", 00:15:13.098 "strip_size_kb": 64, 00:15:13.098 "state": "offline", 00:15:13.098 "raid_level": "raid0", 00:15:13.098 "superblock": true, 00:15:13.098 "num_base_bdevs": 2, 00:15:13.098 "num_base_bdevs_discovered": 1, 00:15:13.098 "num_base_bdevs_operational": 1, 00:15:13.098 "base_bdevs_list": [ 00:15:13.098 { 00:15:13.098 "name": null, 00:15:13.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.098 "is_configured": false, 00:15:13.098 "data_offset": 2048, 00:15:13.098 "data_size": 63488 00:15:13.098 }, 00:15:13.098 { 00:15:13.098 "name": "BaseBdev2", 00:15:13.098 "uuid": "f8770664-8d7c-43b9-a601-f880c33d1f5e", 00:15:13.098 "is_configured": true, 00:15:13.098 "data_offset": 2048, 00:15:13.098 "data_size": 63488 00:15:13.098 } 00:15:13.098 ] 00:15:13.098 }' 00:15:13.098 10:29:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:13.098 10:29:06 -- common/autotest_common.sh@10 -- # set +x 00:15:13.664 10:29:07 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:13.664 10:29:07 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:13.664 10:29:07 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:13.664 10:29:07 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:13.922 10:29:07 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:13.922 10:29:07 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:13.922 10:29:07 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:14.180 [2024-07-12 10:29:07.891897] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:14.180 [2024-07-12 10:29:07.891997] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:15:14.180 10:29:07 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:14.180 10:29:07 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:14.180 10:29:07 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:14.180 10:29:07 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:14.438 10:29:08 -- bdev/bdev_raid.sh@281 -- # 
raid_bdev= 00:15:14.438 10:29:08 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:14.438 10:29:08 -- bdev/bdev_raid.sh@287 -- # killprocess 115282 00:15:14.438 10:29:08 -- common/autotest_common.sh@926 -- # '[' -z 115282 ']' 00:15:14.438 10:29:08 -- common/autotest_common.sh@930 -- # kill -0 115282 00:15:14.438 10:29:08 -- common/autotest_common.sh@931 -- # uname 00:15:14.438 10:29:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:14.438 10:29:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 115282 00:15:14.438 killing process with pid 115282 00:15:14.438 10:29:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:14.438 10:29:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:14.438 10:29:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 115282' 00:15:14.438 10:29:08 -- common/autotest_common.sh@945 -- # kill 115282 00:15:14.438 10:29:08 -- common/autotest_common.sh@950 -- # wait 115282 00:15:14.438 [2024-07-12 10:29:08.233719] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:14.438 [2024-07-12 10:29:08.233819] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:15.370 ************************************ 00:15:15.370 END TEST raid_state_function_test_sb 00:15:15.370 ************************************ 00:15:15.371 10:29:09 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:15.371 00:15:15.371 real 0m10.049s 00:15:15.371 user 0m17.729s 00:15:15.371 sys 0m1.004s 00:15:15.371 10:29:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:15.371 10:29:09 -- common/autotest_common.sh@10 -- # set +x 00:15:15.371 10:29:09 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:15:15.371 10:29:09 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:15:15.371 10:29:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:15.371 10:29:09 -- common/autotest_common.sh@10 -- # set +x 00:15:15.371 ************************************ 00:15:15.371 START TEST raid_superblock_test 00:15:15.371 ************************************ 00:15:15.371 10:29:09 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid0 2 00:15:15.371 10:29:09 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:15:15.371 10:29:09 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:15:15.371 10:29:09 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:15:15.371 10:29:09 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:15:15.371 10:29:09 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:15:15.371 10:29:09 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:15:15.371 10:29:09 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:15:15.371 10:29:09 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:15:15.371 10:29:09 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:15:15.371 10:29:09 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:15:15.371 10:29:09 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:15:15.371 10:29:09 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:15:15.371 10:29:09 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:15:15.371 10:29:09 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:15:15.371 10:29:09 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:15:15.371 10:29:09 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:15:15.371 10:29:09 -- bdev/bdev_raid.sh@357 -- # raid_pid=115627 00:15:15.371 10:29:09 -- bdev/bdev_raid.sh@358 -- # waitforlisten 115627 
/var/tmp/spdk-raid.sock 00:15:15.371 10:29:09 -- common/autotest_common.sh@819 -- # '[' -z 115627 ']' 00:15:15.371 10:29:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:15.371 10:29:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:15.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:15.371 10:29:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:15.371 10:29:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:15.371 10:29:09 -- common/autotest_common.sh@10 -- # set +x 00:15:15.371 10:29:09 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:15.371 [2024-07-12 10:29:09.257559] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:15.371 [2024-07-12 10:29:09.257916] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115627 ] 00:15:15.628 [2024-07-12 10:29:09.427762] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.886 [2024-07-12 10:29:09.652737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.144 [2024-07-12 10:29:09.839151] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:16.402 10:29:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:16.402 10:29:10 -- common/autotest_common.sh@852 -- # return 0 00:15:16.402 10:29:10 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:15:16.402 10:29:10 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:16.402 10:29:10 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:15:16.402 10:29:10 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:15:16.402 10:29:10 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:16.402 10:29:10 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:16.402 10:29:10 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:16.402 10:29:10 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:16.402 10:29:10 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:16.660 malloc1 00:15:16.660 10:29:10 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:16.660 [2024-07-12 10:29:10.565302] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:16.660 [2024-07-12 10:29:10.565400] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:16.660 [2024-07-12 10:29:10.565434] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:15:16.660 [2024-07-12 10:29:10.565480] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:16.660 [2024-07-12 10:29:10.567714] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:16.660 [2024-07-12 10:29:10.567771] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:16.660 pt1 00:15:16.660 10:29:10 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 
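The trace above is the per-member setup these raid tests repeat for every base bdev: a 32 MiB malloc bdev (512-byte blocks, hence the 65536 num_blocks the later bdev dumps report) wrapped in a passthru bdev so each member carries a fixed name and UUID for the raid layer. A minimal sketch of that pair of RPCs, reusing this run's socket path and UUID; the RPC variable is shorthand introduced here, not part of the test scripts:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # 32 MiB backing device with 512-byte blocks -> 65536 blocks
  $RPC bdev_malloc_create 32 512 -b malloc1
  # passthru wrapper gives the member a stable UUID the raid superblock can reference
  $RPC bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001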
00:15:16.661 10:29:10 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:16.661 10:29:10 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:15:16.661 10:29:10 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:15:16.661 10:29:10 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:16.661 10:29:10 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:16.661 10:29:10 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:16.661 10:29:10 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:16.661 10:29:10 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:17.226 malloc2 00:15:17.226 10:29:10 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:17.227 [2024-07-12 10:29:11.087140] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:17.227 [2024-07-12 10:29:11.087213] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:17.227 [2024-07-12 10:29:11.087254] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:15:17.227 [2024-07-12 10:29:11.087308] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:17.227 [2024-07-12 10:29:11.089096] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:17.227 [2024-07-12 10:29:11.089141] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:17.227 pt2 00:15:17.227 10:29:11 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:17.227 10:29:11 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:17.227 10:29:11 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:15:17.485 [2024-07-12 10:29:11.323228] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:17.485 [2024-07-12 10:29:11.325137] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:17.485 [2024-07-12 10:29:11.325339] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:15:17.485 [2024-07-12 10:29:11.325359] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:17.485 [2024-07-12 10:29:11.325467] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:15:17.485 [2024-07-12 10:29:11.325812] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:15:17.485 [2024-07-12 10:29:11.325833] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:15:17.485 [2024-07-12 10:29:11.325995] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:17.485 10:29:11 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:15:17.485 10:29:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:17.485 10:29:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:17.485 10:29:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:17.485 10:29:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:17.485 10:29:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 
00:15:17.485 10:29:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:17.485 10:29:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:17.485 10:29:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:17.485 10:29:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:17.485 10:29:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:17.485 10:29:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.744 10:29:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:17.744 "name": "raid_bdev1", 00:15:17.744 "uuid": "0f570d2b-62e5-4895-85b8-07953402569a", 00:15:17.744 "strip_size_kb": 64, 00:15:17.744 "state": "online", 00:15:17.744 "raid_level": "raid0", 00:15:17.744 "superblock": true, 00:15:17.744 "num_base_bdevs": 2, 00:15:17.744 "num_base_bdevs_discovered": 2, 00:15:17.744 "num_base_bdevs_operational": 2, 00:15:17.744 "base_bdevs_list": [ 00:15:17.744 { 00:15:17.744 "name": "pt1", 00:15:17.744 "uuid": "bb24da82-9643-5694-a756-89b82e9f9bda", 00:15:17.744 "is_configured": true, 00:15:17.744 "data_offset": 2048, 00:15:17.744 "data_size": 63488 00:15:17.744 }, 00:15:17.744 { 00:15:17.744 "name": "pt2", 00:15:17.744 "uuid": "d92f737b-d5d0-5976-b300-825c1a471662", 00:15:17.744 "is_configured": true, 00:15:17.744 "data_offset": 2048, 00:15:17.744 "data_size": 63488 00:15:17.744 } 00:15:17.744 ] 00:15:17.744 }' 00:15:17.744 10:29:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:17.744 10:29:11 -- common/autotest_common.sh@10 -- # set +x 00:15:18.310 10:29:12 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:18.310 10:29:12 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:15:18.569 [2024-07-12 10:29:12.331532] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:18.569 10:29:12 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=0f570d2b-62e5-4895-85b8-07953402569a 00:15:18.569 10:29:12 -- bdev/bdev_raid.sh@380 -- # '[' -z 0f570d2b-62e5-4895-85b8-07953402569a ']' 00:15:18.569 10:29:12 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:18.828 [2024-07-12 10:29:12.575389] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:18.828 [2024-07-12 10:29:12.575408] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:18.828 [2024-07-12 10:29:12.575465] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:18.828 [2024-07-12 10:29:12.575513] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:18.828 [2024-07-12 10:29:12.575526] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:15:18.828 10:29:12 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:18.828 10:29:12 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:15:19.088 10:29:12 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:15:19.088 10:29:12 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:15:19.088 10:29:12 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:19.088 10:29:12 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
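Taken together, the last few entries are the full create/verify/delete cycle for a superblock raid0: assemble the array from the two passthru bdevs, confirm it reports state online with both members discovered, then delete it and watch the DEBUG log walk the state from online to offline during cleanup. A condensed sketch with the exact flags from the trace (-z 64 for a 64 KiB strip, -s to write superblocks); the .state projection on the jq filter is added here for brevity:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $RPC bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1").state'  # expect: online
  $RPC bdev_raid_delete raid_bdev1   # DEBUG log shows online -> offline, then the bdev is freed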
00:15:19.088 10:29:12 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:19.088 10:29:12 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:19.345 10:29:13 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:19.345 10:29:13 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:19.603 10:29:13 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:15:19.603 10:29:13 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:15:19.603 10:29:13 -- common/autotest_common.sh@640 -- # local es=0 00:15:19.603 10:29:13 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:15:19.603 10:29:13 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:19.603 10:29:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:19.603 10:29:13 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:19.603 10:29:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:19.603 10:29:13 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:19.603 10:29:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:19.603 10:29:13 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:19.603 10:29:13 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:19.603 10:29:13 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:15:19.862 [2024-07-12 10:29:13.567554] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:19.862 [2024-07-12 10:29:13.569375] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:19.862 [2024-07-12 10:29:13.569434] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:15:19.862 [2024-07-12 10:29:13.569489] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:15:19.862 [2024-07-12 10:29:13.569524] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:19.862 [2024-07-12 10:29:13.569533] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state configuring 00:15:19.862 request: 00:15:19.862 { 00:15:19.862 "name": "raid_bdev1", 00:15:19.862 "raid_level": "raid0", 00:15:19.862 "base_bdevs": [ 00:15:19.862 "malloc1", 00:15:19.862 "malloc2" 00:15:19.862 ], 00:15:19.862 "superblock": false, 00:15:19.862 "strip_size_kb": 64, 00:15:19.862 "method": "bdev_raid_create", 00:15:19.862 "req_id": 1 00:15:19.862 } 00:15:19.862 Got JSON-RPC error response 00:15:19.862 response: 00:15:19.862 { 00:15:19.862 "code": -17, 00:15:19.862 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:19.862 } 00:15:19.862 10:29:13 -- common/autotest_common.sh@643 -- # es=1 00:15:19.862 10:29:13 -- common/autotest_common.sh@651 
-- # (( es > 128 )) 00:15:19.862 10:29:13 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:15:19.862 10:29:13 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:15:19.862 10:29:13 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:19.862 10:29:13 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:15:19.862 10:29:13 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:15:19.862 10:29:13 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:15:19.862 10:29:13 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:20.121 [2024-07-12 10:29:13.923573] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:20.121 [2024-07-12 10:29:13.923646] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:20.121 [2024-07-12 10:29:13.923676] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:20.121 [2024-07-12 10:29:13.923699] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:20.121 [2024-07-12 10:29:13.925799] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:20.121 [2024-07-12 10:29:13.925855] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:20.121 [2024-07-12 10:29:13.925977] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:15:20.121 [2024-07-12 10:29:13.926039] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:20.121 pt1 00:15:20.121 10:29:13 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:15:20.121 10:29:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:20.121 10:29:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:20.121 10:29:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:20.121 10:29:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:20.121 10:29:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:20.121 10:29:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:20.121 10:29:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:20.121 10:29:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:20.121 10:29:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:20.121 10:29:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:20.121 10:29:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.380 10:29:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:20.380 "name": "raid_bdev1", 00:15:20.380 "uuid": "0f570d2b-62e5-4895-85b8-07953402569a", 00:15:20.380 "strip_size_kb": 64, 00:15:20.380 "state": "configuring", 00:15:20.380 "raid_level": "raid0", 00:15:20.380 "superblock": true, 00:15:20.380 "num_base_bdevs": 2, 00:15:20.380 "num_base_bdevs_discovered": 1, 00:15:20.380 "num_base_bdevs_operational": 2, 00:15:20.380 "base_bdevs_list": [ 00:15:20.380 { 00:15:20.380 "name": "pt1", 00:15:20.380 "uuid": "bb24da82-9643-5694-a756-89b82e9f9bda", 00:15:20.380 "is_configured": true, 00:15:20.380 "data_offset": 2048, 00:15:20.380 "data_size": 63488 00:15:20.380 }, 00:15:20.380 { 00:15:20.380 "name": null, 00:15:20.380 "uuid": "d92f737b-d5d0-5976-b300-825c1a471662", 00:15:20.380 
"is_configured": false, 00:15:20.380 "data_offset": 2048, 00:15:20.380 "data_size": 63488 00:15:20.380 } 00:15:20.380 ] 00:15:20.380 }' 00:15:20.380 10:29:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:20.380 10:29:14 -- common/autotest_common.sh@10 -- # set +x 00:15:20.946 10:29:14 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:15:20.946 10:29:14 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:15:20.947 10:29:14 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:20.947 10:29:14 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:21.205 [2024-07-12 10:29:14.967755] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:21.205 [2024-07-12 10:29:14.967826] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:21.205 [2024-07-12 10:29:14.967857] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:21.205 [2024-07-12 10:29:14.967881] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:21.205 [2024-07-12 10:29:14.968236] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:21.205 [2024-07-12 10:29:14.968279] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:21.205 [2024-07-12 10:29:14.968357] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:21.205 [2024-07-12 10:29:14.968379] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:21.205 [2024-07-12 10:29:14.968474] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:15:21.205 [2024-07-12 10:29:14.968494] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:21.205 [2024-07-12 10:29:14.968600] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:15:21.205 [2024-07-12 10:29:14.968877] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:15:21.205 [2024-07-12 10:29:14.968897] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:15:21.206 [2024-07-12 10:29:14.969007] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:21.206 pt2 00:15:21.206 10:29:14 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:15:21.206 10:29:14 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:21.206 10:29:14 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:15:21.206 10:29:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:21.206 10:29:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:21.206 10:29:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:21.206 10:29:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:21.206 10:29:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:21.206 10:29:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:21.206 10:29:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:21.206 10:29:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:21.206 10:29:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:21.206 10:29:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:21.206 10:29:14 -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.464 10:29:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:21.464 "name": "raid_bdev1", 00:15:21.464 "uuid": "0f570d2b-62e5-4895-85b8-07953402569a", 00:15:21.464 "strip_size_kb": 64, 00:15:21.464 "state": "online", 00:15:21.464 "raid_level": "raid0", 00:15:21.464 "superblock": true, 00:15:21.464 "num_base_bdevs": 2, 00:15:21.464 "num_base_bdevs_discovered": 2, 00:15:21.464 "num_base_bdevs_operational": 2, 00:15:21.464 "base_bdevs_list": [ 00:15:21.464 { 00:15:21.464 "name": "pt1", 00:15:21.464 "uuid": "bb24da82-9643-5694-a756-89b82e9f9bda", 00:15:21.464 "is_configured": true, 00:15:21.464 "data_offset": 2048, 00:15:21.464 "data_size": 63488 00:15:21.464 }, 00:15:21.464 { 00:15:21.464 "name": "pt2", 00:15:21.464 "uuid": "d92f737b-d5d0-5976-b300-825c1a471662", 00:15:21.464 "is_configured": true, 00:15:21.464 "data_offset": 2048, 00:15:21.464 "data_size": 63488 00:15:21.464 } 00:15:21.464 ] 00:15:21.464 }' 00:15:21.464 10:29:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:21.464 10:29:15 -- common/autotest_common.sh@10 -- # set +x 00:15:22.030 10:29:15 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:22.030 10:29:15 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:15:22.289 [2024-07-12 10:29:16.168117] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:22.289 10:29:16 -- bdev/bdev_raid.sh@430 -- # '[' 0f570d2b-62e5-4895-85b8-07953402569a '!=' 0f570d2b-62e5-4895-85b8-07953402569a ']' 00:15:22.289 10:29:16 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:15:22.289 10:29:16 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:22.289 10:29:16 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:22.289 10:29:16 -- bdev/bdev_raid.sh@511 -- # killprocess 115627 00:15:22.289 10:29:16 -- common/autotest_common.sh@926 -- # '[' -z 115627 ']' 00:15:22.289 10:29:16 -- common/autotest_common.sh@930 -- # kill -0 115627 00:15:22.289 10:29:16 -- common/autotest_common.sh@931 -- # uname 00:15:22.289 10:29:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:22.289 10:29:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 115627 00:15:22.289 killing process with pid 115627 00:15:22.289 10:29:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:22.289 10:29:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:22.289 10:29:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 115627' 00:15:22.289 10:29:16 -- common/autotest_common.sh@945 -- # kill 115627 00:15:22.289 10:29:16 -- common/autotest_common.sh@950 -- # wait 115627 00:15:22.289 [2024-07-12 10:29:16.199870] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:22.289 [2024-07-12 10:29:16.199978] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:22.289 [2024-07-12 10:29:16.200046] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:22.289 [2024-07-12 10:29:16.200072] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:15:22.548 [2024-07-12 10:29:16.401569] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:23.923 ************************************ 00:15:23.923 END TEST raid_superblock_test 00:15:23.923 ************************************ 00:15:23.923 10:29:17 -- 
bdev/bdev_raid.sh@513 -- # return 0 00:15:23.923 00:15:23.923 real 0m8.212s 00:15:23.923 user 0m13.931s 00:15:23.923 sys 0m1.016s 00:15:23.923 10:29:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:23.923 10:29:17 -- common/autotest_common.sh@10 -- # set +x 00:15:23.923 10:29:17 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:15:23.923 10:29:17 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:15:23.923 10:29:17 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:15:23.923 10:29:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:23.923 10:29:17 -- common/autotest_common.sh@10 -- # set +x 00:15:23.923 ************************************ 00:15:23.923 START TEST raid_state_function_test 00:15:23.923 ************************************ 00:15:23.923 10:29:17 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 2 false 00:15:23.923 10:29:17 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:15:23.923 10:29:17 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:15:23.923 10:29:17 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:15:23.923 10:29:17 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:23.923 10:29:17 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:15:23.923 10:29:17 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:23.923 10:29:17 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:23.923 10:29:17 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:15:23.923 10:29:17 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:23.923 10:29:17 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:23.923 10:29:17 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:15:23.923 10:29:17 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:23.923 10:29:17 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:23.923 10:29:17 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:23.923 10:29:17 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:23.923 10:29:17 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:23.923 10:29:17 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:23.923 10:29:17 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:23.923 10:29:17 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:15:23.923 10:29:17 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:15:23.923 10:29:17 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:15:23.923 10:29:17 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:15:23.923 10:29:17 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:15:23.923 10:29:17 -- bdev/bdev_raid.sh@226 -- # raid_pid=115894 00:15:23.923 Process raid pid: 115894 00:15:23.923 10:29:17 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 115894' 00:15:23.923 10:29:17 -- bdev/bdev_raid.sh@228 -- # waitforlisten 115894 /var/tmp/spdk-raid.sock 00:15:23.923 10:29:17 -- common/autotest_common.sh@819 -- # '[' -z 115894 ']' 00:15:23.923 10:29:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:23.923 10:29:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:23.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:23.923 10:29:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
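Every TEST block in this log follows the harness pattern visible here: launch a dedicated bdev_svc RPC target on /var/tmp/spdk-raid.sock with bdev_raid debug logging enabled, wait for the socket to accept RPCs, run the assertions, then kill the pid recorded at startup. Roughly, with the paths and flags from this run (the polling loop below stands in for autotest's waitforlisten helper and is not the exact implementation):

  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
  raid_pid=$!
  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  until $RPC rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done  # socket is up
  # ... test body: create/verify/delete raid bdevs ...
  kill $raid_pid; wait $raid_pid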
00:15:23.923 10:29:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:23.923 10:29:17 -- common/autotest_common.sh@10 -- # set +x 00:15:23.923 10:29:17 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:23.923 [2024-07-12 10:29:17.521858] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:23.923 [2024-07-12 10:29:17.522199] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:23.923 [2024-07-12 10:29:17.676809] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:24.181 [2024-07-12 10:29:17.859522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:24.181 [2024-07-12 10:29:18.046957] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:24.755 10:29:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:24.755 10:29:18 -- common/autotest_common.sh@852 -- # return 0 00:15:24.755 10:29:18 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:24.755 [2024-07-12 10:29:18.630798] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:24.755 [2024-07-12 10:29:18.630999] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:24.755 [2024-07-12 10:29:18.631097] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:24.755 [2024-07-12 10:29:18.631154] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:24.755 10:29:18 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:24.755 10:29:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:24.755 10:29:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:24.755 10:29:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:24.755 10:29:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:24.755 10:29:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:24.755 10:29:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:24.755 10:29:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:24.755 10:29:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:24.755 10:29:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:24.755 10:29:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:24.755 10:29:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.077 10:29:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:25.077 "name": "Existed_Raid", 00:15:25.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.077 "strip_size_kb": 64, 00:15:25.077 "state": "configuring", 00:15:25.077 "raid_level": "concat", 00:15:25.077 "superblock": false, 00:15:25.077 "num_base_bdevs": 2, 00:15:25.077 "num_base_bdevs_discovered": 0, 00:15:25.077 "num_base_bdevs_operational": 2, 00:15:25.077 "base_bdevs_list": [ 00:15:25.077 { 00:15:25.077 "name": "BaseBdev1", 00:15:25.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.077 "is_configured": false, 
00:15:25.077 "data_offset": 0, 00:15:25.077 "data_size": 0 00:15:25.077 }, 00:15:25.077 { 00:15:25.077 "name": "BaseBdev2", 00:15:25.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.077 "is_configured": false, 00:15:25.077 "data_offset": 0, 00:15:25.077 "data_size": 0 00:15:25.077 } 00:15:25.077 ] 00:15:25.077 }' 00:15:25.077 10:29:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:25.077 10:29:18 -- common/autotest_common.sh@10 -- # set +x 00:15:25.695 10:29:19 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:25.953 [2024-07-12 10:29:19.738824] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:25.953 [2024-07-12 10:29:19.738963] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:15:25.953 10:29:19 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:26.211 [2024-07-12 10:29:19.994893] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:26.211 [2024-07-12 10:29:19.995074] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:26.211 [2024-07-12 10:29:19.995169] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:26.211 [2024-07-12 10:29:19.995228] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:26.211 10:29:19 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:26.469 [2024-07-12 10:29:20.288276] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:26.469 BaseBdev1 00:15:26.469 10:29:20 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:26.469 10:29:20 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:26.469 10:29:20 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:26.469 10:29:20 -- common/autotest_common.sh@889 -- # local i 00:15:26.469 10:29:20 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:26.469 10:29:20 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:26.469 10:29:20 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:26.727 10:29:20 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:26.986 [ 00:15:26.986 { 00:15:26.986 "name": "BaseBdev1", 00:15:26.986 "aliases": [ 00:15:26.986 "5a0076f6-0997-41f2-a753-d6baf048e989" 00:15:26.986 ], 00:15:26.986 "product_name": "Malloc disk", 00:15:26.986 "block_size": 512, 00:15:26.986 "num_blocks": 65536, 00:15:26.986 "uuid": "5a0076f6-0997-41f2-a753-d6baf048e989", 00:15:26.986 "assigned_rate_limits": { 00:15:26.986 "rw_ios_per_sec": 0, 00:15:26.986 "rw_mbytes_per_sec": 0, 00:15:26.986 "r_mbytes_per_sec": 0, 00:15:26.986 "w_mbytes_per_sec": 0 00:15:26.986 }, 00:15:26.986 "claimed": true, 00:15:26.986 "claim_type": "exclusive_write", 00:15:26.986 "zoned": false, 00:15:26.986 "supported_io_types": { 00:15:26.986 "read": true, 00:15:26.986 "write": true, 00:15:26.986 "unmap": true, 00:15:26.986 "write_zeroes": true, 00:15:26.986 "flush": true, 00:15:26.986 "reset": true, 00:15:26.986 
"compare": false, 00:15:26.986 "compare_and_write": false, 00:15:26.986 "abort": true, 00:15:26.986 "nvme_admin": false, 00:15:26.986 "nvme_io": false 00:15:26.986 }, 00:15:26.986 "memory_domains": [ 00:15:26.986 { 00:15:26.986 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.986 "dma_device_type": 2 00:15:26.986 } 00:15:26.986 ], 00:15:26.986 "driver_specific": {} 00:15:26.986 } 00:15:26.986 ] 00:15:26.986 10:29:20 -- common/autotest_common.sh@895 -- # return 0 00:15:26.986 10:29:20 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:26.986 10:29:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:26.986 10:29:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:26.986 10:29:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:26.986 10:29:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:26.986 10:29:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:26.986 10:29:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:26.986 10:29:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:26.986 10:29:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:26.986 10:29:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:26.986 10:29:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:26.986 10:29:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:26.986 10:29:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:26.986 "name": "Existed_Raid", 00:15:26.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.986 "strip_size_kb": 64, 00:15:26.986 "state": "configuring", 00:15:26.986 "raid_level": "concat", 00:15:26.986 "superblock": false, 00:15:26.986 "num_base_bdevs": 2, 00:15:26.986 "num_base_bdevs_discovered": 1, 00:15:26.986 "num_base_bdevs_operational": 2, 00:15:26.986 "base_bdevs_list": [ 00:15:26.986 { 00:15:26.986 "name": "BaseBdev1", 00:15:26.986 "uuid": "5a0076f6-0997-41f2-a753-d6baf048e989", 00:15:26.986 "is_configured": true, 00:15:26.986 "data_offset": 0, 00:15:26.986 "data_size": 65536 00:15:26.986 }, 00:15:26.986 { 00:15:26.986 "name": "BaseBdev2", 00:15:26.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.986 "is_configured": false, 00:15:26.986 "data_offset": 0, 00:15:26.986 "data_size": 0 00:15:26.986 } 00:15:26.986 ] 00:15:26.986 }' 00:15:26.986 10:29:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:26.986 10:29:20 -- common/autotest_common.sh@10 -- # set +x 00:15:27.922 10:29:21 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:27.922 [2024-07-12 10:29:21.692516] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:27.922 [2024-07-12 10:29:21.692663] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:15:27.922 10:29:21 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:15:27.922 10:29:21 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:28.179 [2024-07-12 10:29:21.944593] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:28.179 [2024-07-12 10:29:21.946597] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev2 00:15:28.179 [2024-07-12 10:29:21.946757] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:28.179 10:29:21 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:28.179 10:29:21 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:28.179 10:29:21 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:28.179 10:29:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:28.179 10:29:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:28.179 10:29:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:28.179 10:29:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:28.180 10:29:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:28.180 10:29:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:28.180 10:29:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:28.180 10:29:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:28.180 10:29:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:28.180 10:29:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:28.180 10:29:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:28.437 10:29:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:28.437 "name": "Existed_Raid", 00:15:28.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.437 "strip_size_kb": 64, 00:15:28.437 "state": "configuring", 00:15:28.437 "raid_level": "concat", 00:15:28.437 "superblock": false, 00:15:28.437 "num_base_bdevs": 2, 00:15:28.437 "num_base_bdevs_discovered": 1, 00:15:28.437 "num_base_bdevs_operational": 2, 00:15:28.437 "base_bdevs_list": [ 00:15:28.437 { 00:15:28.437 "name": "BaseBdev1", 00:15:28.437 "uuid": "5a0076f6-0997-41f2-a753-d6baf048e989", 00:15:28.438 "is_configured": true, 00:15:28.438 "data_offset": 0, 00:15:28.438 "data_size": 65536 00:15:28.438 }, 00:15:28.438 { 00:15:28.438 "name": "BaseBdev2", 00:15:28.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.438 "is_configured": false, 00:15:28.438 "data_offset": 0, 00:15:28.438 "data_size": 0 00:15:28.438 } 00:15:28.438 ] 00:15:28.438 }' 00:15:28.438 10:29:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:28.438 10:29:22 -- common/autotest_common.sh@10 -- # set +x 00:15:29.005 10:29:22 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:29.262 [2024-07-12 10:29:23.046353] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:29.262 [2024-07-12 10:29:23.046394] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:15:29.262 [2024-07-12 10:29:23.046414] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:29.262 [2024-07-12 10:29:23.046531] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005520 00:15:29.262 [2024-07-12 10:29:23.046866] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:15:29.262 [2024-07-12 10:29:23.046888] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:15:29.262 BaseBdev2 00:15:29.262 [2024-07-12 10:29:23.047140] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:29.262 10:29:23 -- bdev/bdev_raid.sh@257 
-- # waitforbdev BaseBdev2 00:15:29.262 10:29:23 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:15:29.262 10:29:23 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:29.262 10:29:23 -- common/autotest_common.sh@889 -- # local i 00:15:29.262 10:29:23 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:29.262 10:29:23 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:29.263 10:29:23 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:29.519 10:29:23 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:29.776 [ 00:15:29.777 { 00:15:29.777 "name": "BaseBdev2", 00:15:29.777 "aliases": [ 00:15:29.777 "b5a3d2cf-7139-4851-80b5-dae76b80ea11" 00:15:29.777 ], 00:15:29.777 "product_name": "Malloc disk", 00:15:29.777 "block_size": 512, 00:15:29.777 "num_blocks": 65536, 00:15:29.777 "uuid": "b5a3d2cf-7139-4851-80b5-dae76b80ea11", 00:15:29.777 "assigned_rate_limits": { 00:15:29.777 "rw_ios_per_sec": 0, 00:15:29.777 "rw_mbytes_per_sec": 0, 00:15:29.777 "r_mbytes_per_sec": 0, 00:15:29.777 "w_mbytes_per_sec": 0 00:15:29.777 }, 00:15:29.777 "claimed": true, 00:15:29.777 "claim_type": "exclusive_write", 00:15:29.777 "zoned": false, 00:15:29.777 "supported_io_types": { 00:15:29.777 "read": true, 00:15:29.777 "write": true, 00:15:29.777 "unmap": true, 00:15:29.777 "write_zeroes": true, 00:15:29.777 "flush": true, 00:15:29.777 "reset": true, 00:15:29.777 "compare": false, 00:15:29.777 "compare_and_write": false, 00:15:29.777 "abort": true, 00:15:29.777 "nvme_admin": false, 00:15:29.777 "nvme_io": false 00:15:29.777 }, 00:15:29.777 "memory_domains": [ 00:15:29.777 { 00:15:29.777 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:29.777 "dma_device_type": 2 00:15:29.777 } 00:15:29.777 ], 00:15:29.777 "driver_specific": {} 00:15:29.777 } 00:15:29.777 ] 00:15:29.777 10:29:23 -- common/autotest_common.sh@895 -- # return 0 00:15:29.777 10:29:23 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:29.777 10:29:23 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:29.777 10:29:23 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:15:29.777 10:29:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:29.777 10:29:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:29.777 10:29:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:29.777 10:29:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:29.777 10:29:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:29.777 10:29:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:29.777 10:29:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:29.777 10:29:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:29.777 10:29:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:29.777 10:29:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:29.777 10:29:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:30.035 10:29:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:30.035 "name": "Existed_Raid", 00:15:30.035 "uuid": "d7a92649-f0fc-40d0-8c9f-ecbd5b69613b", 00:15:30.035 "strip_size_kb": 64, 00:15:30.035 "state": "online", 00:15:30.035 "raid_level": "concat", 00:15:30.035 "superblock": false, 
00:15:30.035 "num_base_bdevs": 2, 00:15:30.035 "num_base_bdevs_discovered": 2, 00:15:30.035 "num_base_bdevs_operational": 2, 00:15:30.035 "base_bdevs_list": [ 00:15:30.035 { 00:15:30.035 "name": "BaseBdev1", 00:15:30.035 "uuid": "5a0076f6-0997-41f2-a753-d6baf048e989", 00:15:30.035 "is_configured": true, 00:15:30.035 "data_offset": 0, 00:15:30.035 "data_size": 65536 00:15:30.035 }, 00:15:30.035 { 00:15:30.035 "name": "BaseBdev2", 00:15:30.035 "uuid": "b5a3d2cf-7139-4851-80b5-dae76b80ea11", 00:15:30.035 "is_configured": true, 00:15:30.035 "data_offset": 0, 00:15:30.035 "data_size": 65536 00:15:30.035 } 00:15:30.035 ] 00:15:30.035 }' 00:15:30.035 10:29:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:30.035 10:29:23 -- common/autotest_common.sh@10 -- # set +x 00:15:30.600 10:29:24 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:30.600 [2024-07-12 10:29:24.494750] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:30.600 [2024-07-12 10:29:24.494788] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:30.600 [2024-07-12 10:29:24.494864] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:30.858 10:29:24 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:30.858 10:29:24 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:15:30.858 10:29:24 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:30.858 10:29:24 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:30.858 10:29:24 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:15:30.858 10:29:24 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:15:30.858 10:29:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:30.858 10:29:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:15:30.858 10:29:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:30.858 10:29:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:30.858 10:29:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:15:30.858 10:29:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:30.858 10:29:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:30.858 10:29:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:30.858 10:29:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:30.858 10:29:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:30.858 10:29:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:30.858 10:29:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:30.858 "name": "Existed_Raid", 00:15:30.858 "uuid": "d7a92649-f0fc-40d0-8c9f-ecbd5b69613b", 00:15:30.858 "strip_size_kb": 64, 00:15:30.858 "state": "offline", 00:15:30.858 "raid_level": "concat", 00:15:30.858 "superblock": false, 00:15:30.858 "num_base_bdevs": 2, 00:15:30.858 "num_base_bdevs_discovered": 1, 00:15:30.858 "num_base_bdevs_operational": 1, 00:15:30.858 "base_bdevs_list": [ 00:15:30.858 { 00:15:30.858 "name": null, 00:15:30.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.858 "is_configured": false, 00:15:30.858 "data_offset": 0, 00:15:30.858 "data_size": 65536 00:15:30.858 }, 00:15:30.858 { 00:15:30.858 "name": "BaseBdev2", 00:15:30.858 "uuid": "b5a3d2cf-7139-4851-80b5-dae76b80ea11", 00:15:30.858 "is_configured": true, 00:15:30.858 "data_offset": 0, 00:15:30.858 
"data_size": 65536 00:15:30.858 } 00:15:30.858 ] 00:15:30.858 }' 00:15:30.858 10:29:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:30.858 10:29:24 -- common/autotest_common.sh@10 -- # set +x 00:15:31.791 10:29:25 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:31.791 10:29:25 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:31.791 10:29:25 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:31.791 10:29:25 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:31.791 10:29:25 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:31.791 10:29:25 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:31.791 10:29:25 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:32.049 [2024-07-12 10:29:25.802893] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:32.050 [2024-07-12 10:29:25.802974] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:15:32.050 10:29:25 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:32.050 10:29:25 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:32.050 10:29:25 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:32.050 10:29:25 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:32.308 10:29:26 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:15:32.308 10:29:26 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:32.308 10:29:26 -- bdev/bdev_raid.sh@287 -- # killprocess 115894 00:15:32.308 10:29:26 -- common/autotest_common.sh@926 -- # '[' -z 115894 ']' 00:15:32.308 10:29:26 -- common/autotest_common.sh@930 -- # kill -0 115894 00:15:32.308 10:29:26 -- common/autotest_common.sh@931 -- # uname 00:15:32.308 10:29:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:32.308 10:29:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 115894 00:15:32.308 killing process with pid 115894 00:15:32.308 10:29:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:32.308 10:29:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:32.308 10:29:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 115894' 00:15:32.308 10:29:26 -- common/autotest_common.sh@945 -- # kill 115894 00:15:32.308 10:29:26 -- common/autotest_common.sh@950 -- # wait 115894 00:15:32.308 [2024-07-12 10:29:26.165677] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:32.308 [2024-07-12 10:29:26.165913] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:33.244 ************************************ 00:15:33.244 END TEST raid_state_function_test 00:15:33.244 ************************************ 00:15:33.244 10:29:27 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:33.244 00:15:33.244 real 0m9.624s 00:15:33.244 user 0m16.876s 00:15:33.244 sys 0m1.125s 00:15:33.244 10:29:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:33.244 10:29:27 -- common/autotest_common.sh@10 -- # set +x 00:15:33.244 10:29:27 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:15:33.244 10:29:27 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:15:33.244 10:29:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:33.244 10:29:27 -- common/autotest_common.sh@10 -- # 
set +x 00:15:33.244 ************************************ 00:15:33.244 START TEST raid_state_function_test_sb 00:15:33.244 ************************************ 00:15:33.244 10:29:27 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 2 true 00:15:33.244 10:29:27 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:15:33.244 10:29:27 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:15:33.244 10:29:27 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:15:33.244 10:29:27 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:33.244 10:29:27 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:15:33.244 10:29:27 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:33.244 10:29:27 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:33.244 10:29:27 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:15:33.244 10:29:27 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:33.244 10:29:27 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:33.244 10:29:27 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:15:33.244 10:29:27 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:33.244 10:29:27 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:33.244 10:29:27 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:33.244 10:29:27 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:33.244 10:29:27 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:33.244 10:29:27 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:33.244 10:29:27 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:33.244 10:29:27 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:15:33.244 10:29:27 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:15:33.244 10:29:27 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:15:33.244 10:29:27 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:15:33.244 10:29:27 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:15:33.244 10:29:27 -- bdev/bdev_raid.sh@226 -- # raid_pid=116227 00:15:33.244 10:29:27 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 116227' 00:15:33.244 Process raid pid: 116227 00:15:33.244 10:29:27 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:33.244 10:29:27 -- bdev/bdev_raid.sh@228 -- # waitforlisten 116227 /var/tmp/spdk-raid.sock 00:15:33.244 10:29:27 -- common/autotest_common.sh@819 -- # '[' -z 116227 ']' 00:15:33.244 10:29:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:33.244 10:29:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:33.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:33.244 10:29:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:33.244 10:29:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:33.244 10:29:27 -- common/autotest_common.sh@10 -- # set +x 00:15:33.501 [2024-07-12 10:29:27.210678] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
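The test that just ended and the _sb variant starting here share one assertion primitive: read the array back with bdev_raid_get_bdevs and compare the reported state against the expected lifecycle stage, configuring while base bdevs are still missing, online once both are claimed, and offline after a member of a non-redundant level (raid0 or concat) is deleted. A sketch of that check built on the trace's own jq filter (the .state projection is added here):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  state=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state')
  [ "$state" = offline ] || exit 1  # concat has no redundancy: losing one member offlines the array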
00:15:33.501 [2024-07-12 10:29:27.210875] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:33.501 [2024-07-12 10:29:27.381053] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.756 [2024-07-12 10:29:27.585154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:34.014 [2024-07-12 10:29:27.751369] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:34.272 10:29:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:34.272 10:29:28 -- common/autotest_common.sh@852 -- # return 0 00:15:34.272 10:29:28 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:34.530 [2024-07-12 10:29:28.323263] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:34.530 [2024-07-12 10:29:28.323344] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:34.530 [2024-07-12 10:29:28.323365] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:34.530 [2024-07-12 10:29:28.323385] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:34.530 10:29:28 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:34.530 10:29:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:34.530 10:29:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:34.530 10:29:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:34.530 10:29:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:34.530 10:29:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:34.530 10:29:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:34.530 10:29:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:34.530 10:29:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:34.530 10:29:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:34.530 10:29:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:34.530 10:29:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:34.789 10:29:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:34.789 "name": "Existed_Raid", 00:15:34.789 "uuid": "83ac3b31-e507-4bfb-9260-232c16144dab", 00:15:34.789 "strip_size_kb": 64, 00:15:34.789 "state": "configuring", 00:15:34.789 "raid_level": "concat", 00:15:34.789 "superblock": true, 00:15:34.789 "num_base_bdevs": 2, 00:15:34.789 "num_base_bdevs_discovered": 0, 00:15:34.789 "num_base_bdevs_operational": 2, 00:15:34.789 "base_bdevs_list": [ 00:15:34.789 { 00:15:34.789 "name": "BaseBdev1", 00:15:34.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.789 "is_configured": false, 00:15:34.789 "data_offset": 0, 00:15:34.789 "data_size": 0 00:15:34.789 }, 00:15:34.789 { 00:15:34.789 "name": "BaseBdev2", 00:15:34.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.789 "is_configured": false, 00:15:34.789 "data_offset": 0, 00:15:34.789 "data_size": 0 00:15:34.789 } 00:15:34.789 ] 00:15:34.789 }' 00:15:34.789 10:29:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:34.789 10:29:28 -- 
common/autotest_common.sh@10 -- # set +x 00:15:35.356 10:29:29 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:35.356 [2024-07-12 10:29:29.251269] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:35.356 [2024-07-12 10:29:29.251303] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:15:35.356 10:29:29 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:35.615 [2024-07-12 10:29:29.507361] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:35.615 [2024-07-12 10:29:29.507423] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:35.615 [2024-07-12 10:29:29.507450] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:35.615 [2024-07-12 10:29:29.507470] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:35.615 10:29:29 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:35.873 [2024-07-12 10:29:29.772318] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:35.873 BaseBdev1 00:15:35.873 10:29:29 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:35.873 10:29:29 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:35.873 10:29:29 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:35.873 10:29:29 -- common/autotest_common.sh@889 -- # local i 00:15:35.873 10:29:29 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:35.873 10:29:29 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:35.873 10:29:29 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:36.131 10:29:29 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:36.389 [ 00:15:36.389 { 00:15:36.389 "name": "BaseBdev1", 00:15:36.389 "aliases": [ 00:15:36.389 "16840377-3082-4a1b-9b98-70613fb0335f" 00:15:36.389 ], 00:15:36.389 "product_name": "Malloc disk", 00:15:36.389 "block_size": 512, 00:15:36.389 "num_blocks": 65536, 00:15:36.389 "uuid": "16840377-3082-4a1b-9b98-70613fb0335f", 00:15:36.389 "assigned_rate_limits": { 00:15:36.389 "rw_ios_per_sec": 0, 00:15:36.389 "rw_mbytes_per_sec": 0, 00:15:36.389 "r_mbytes_per_sec": 0, 00:15:36.389 "w_mbytes_per_sec": 0 00:15:36.389 }, 00:15:36.389 "claimed": true, 00:15:36.389 "claim_type": "exclusive_write", 00:15:36.389 "zoned": false, 00:15:36.389 "supported_io_types": { 00:15:36.389 "read": true, 00:15:36.389 "write": true, 00:15:36.389 "unmap": true, 00:15:36.389 "write_zeroes": true, 00:15:36.389 "flush": true, 00:15:36.389 "reset": true, 00:15:36.389 "compare": false, 00:15:36.389 "compare_and_write": false, 00:15:36.389 "abort": true, 00:15:36.389 "nvme_admin": false, 00:15:36.389 "nvme_io": false 00:15:36.389 }, 00:15:36.389 "memory_domains": [ 00:15:36.389 { 00:15:36.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:36.389 "dma_device_type": 2 00:15:36.389 } 00:15:36.389 ], 00:15:36.389 "driver_specific": {} 00:15:36.389 } 00:15:36.389 ] 00:15:36.389 
10:29:30 -- common/autotest_common.sh@895 -- # return 0 00:15:36.389 10:29:30 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:36.389 10:29:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:36.389 10:29:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:36.389 10:29:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:36.390 10:29:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:36.390 10:29:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:36.390 10:29:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:36.390 10:29:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:36.390 10:29:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:36.390 10:29:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:36.390 10:29:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:36.390 10:29:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:36.647 10:29:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:36.647 "name": "Existed_Raid", 00:15:36.647 "uuid": "4bb1ccd1-1a4e-4e60-ac6a-0baa56c0c496", 00:15:36.647 "strip_size_kb": 64, 00:15:36.647 "state": "configuring", 00:15:36.647 "raid_level": "concat", 00:15:36.647 "superblock": true, 00:15:36.647 "num_base_bdevs": 2, 00:15:36.647 "num_base_bdevs_discovered": 1, 00:15:36.647 "num_base_bdevs_operational": 2, 00:15:36.647 "base_bdevs_list": [ 00:15:36.647 { 00:15:36.647 "name": "BaseBdev1", 00:15:36.647 "uuid": "16840377-3082-4a1b-9b98-70613fb0335f", 00:15:36.647 "is_configured": true, 00:15:36.647 "data_offset": 2048, 00:15:36.647 "data_size": 63488 00:15:36.647 }, 00:15:36.647 { 00:15:36.647 "name": "BaseBdev2", 00:15:36.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.647 "is_configured": false, 00:15:36.647 "data_offset": 0, 00:15:36.647 "data_size": 0 00:15:36.647 } 00:15:36.647 ] 00:15:36.647 }' 00:15:36.647 10:29:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:36.647 10:29:30 -- common/autotest_common.sh@10 -- # set +x 00:15:37.212 10:29:30 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:37.470 [2024-07-12 10:29:31.208593] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:37.470 [2024-07-12 10:29:31.208640] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:15:37.470 10:29:31 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:15:37.470 10:29:31 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:37.728 10:29:31 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:37.985 BaseBdev1 00:15:37.985 10:29:31 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:15:37.985 10:29:31 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:37.985 10:29:31 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:37.985 10:29:31 -- common/autotest_common.sh@889 -- # local i 00:15:37.985 10:29:31 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:37.985 10:29:31 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:37.985 10:29:31 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:38.243 10:29:31 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:38.243 [ 00:15:38.243 { 00:15:38.243 "name": "BaseBdev1", 00:15:38.243 "aliases": [ 00:15:38.243 "1c6926fb-3d4a-4683-99e9-a8db545bea9f" 00:15:38.243 ], 00:15:38.243 "product_name": "Malloc disk", 00:15:38.243 "block_size": 512, 00:15:38.243 "num_blocks": 65536, 00:15:38.243 "uuid": "1c6926fb-3d4a-4683-99e9-a8db545bea9f", 00:15:38.243 "assigned_rate_limits": { 00:15:38.243 "rw_ios_per_sec": 0, 00:15:38.243 "rw_mbytes_per_sec": 0, 00:15:38.243 "r_mbytes_per_sec": 0, 00:15:38.243 "w_mbytes_per_sec": 0 00:15:38.243 }, 00:15:38.243 "claimed": false, 00:15:38.243 "zoned": false, 00:15:38.243 "supported_io_types": { 00:15:38.243 "read": true, 00:15:38.243 "write": true, 00:15:38.243 "unmap": true, 00:15:38.243 "write_zeroes": true, 00:15:38.243 "flush": true, 00:15:38.243 "reset": true, 00:15:38.243 "compare": false, 00:15:38.243 "compare_and_write": false, 00:15:38.243 "abort": true, 00:15:38.243 "nvme_admin": false, 00:15:38.243 "nvme_io": false 00:15:38.243 }, 00:15:38.243 "memory_domains": [ 00:15:38.243 { 00:15:38.243 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:38.243 "dma_device_type": 2 00:15:38.243 } 00:15:38.243 ], 00:15:38.243 "driver_specific": {} 00:15:38.243 } 00:15:38.243 ] 00:15:38.243 10:29:32 -- common/autotest_common.sh@895 -- # return 0 00:15:38.243 10:29:32 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:38.501 [2024-07-12 10:29:32.354239] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:38.502 [2024-07-12 10:29:32.356094] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:38.502 [2024-07-12 10:29:32.356157] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:38.502 10:29:32 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:38.502 10:29:32 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:38.502 10:29:32 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:38.502 10:29:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:38.502 10:29:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:38.502 10:29:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:38.502 10:29:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:38.502 10:29:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:38.502 10:29:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:38.502 10:29:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:38.502 10:29:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:38.502 10:29:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:38.502 10:29:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:38.502 10:29:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:38.760 10:29:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:38.760 "name": "Existed_Raid", 00:15:38.760 "uuid": "9c27c73c-1579-4fa3-9b77-e31335e470bf", 00:15:38.760 "strip_size_kb": 64, 00:15:38.760 "state": 
"configuring", 00:15:38.760 "raid_level": "concat", 00:15:38.760 "superblock": true, 00:15:38.760 "num_base_bdevs": 2, 00:15:38.760 "num_base_bdevs_discovered": 1, 00:15:38.760 "num_base_bdevs_operational": 2, 00:15:38.760 "base_bdevs_list": [ 00:15:38.760 { 00:15:38.760 "name": "BaseBdev1", 00:15:38.760 "uuid": "1c6926fb-3d4a-4683-99e9-a8db545bea9f", 00:15:38.760 "is_configured": true, 00:15:38.760 "data_offset": 2048, 00:15:38.760 "data_size": 63488 00:15:38.760 }, 00:15:38.760 { 00:15:38.760 "name": "BaseBdev2", 00:15:38.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.760 "is_configured": false, 00:15:38.760 "data_offset": 0, 00:15:38.760 "data_size": 0 00:15:38.760 } 00:15:38.760 ] 00:15:38.760 }' 00:15:38.760 10:29:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:38.760 10:29:32 -- common/autotest_common.sh@10 -- # set +x 00:15:39.325 10:29:33 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:39.583 [2024-07-12 10:29:33.407697] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:39.583 [2024-07-12 10:29:33.407902] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:15:39.583 [2024-07-12 10:29:33.407917] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:39.583 BaseBdev2 00:15:39.583 [2024-07-12 10:29:33.408033] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:15:39.583 [2024-07-12 10:29:33.408356] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:15:39.583 [2024-07-12 10:29:33.408378] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:15:39.583 [2024-07-12 10:29:33.408517] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:39.583 10:29:33 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:39.583 10:29:33 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:15:39.583 10:29:33 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:39.583 10:29:33 -- common/autotest_common.sh@889 -- # local i 00:15:39.583 10:29:33 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:39.583 10:29:33 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:39.583 10:29:33 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:39.841 10:29:33 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:40.098 [ 00:15:40.098 { 00:15:40.098 "name": "BaseBdev2", 00:15:40.098 "aliases": [ 00:15:40.098 "f91a22b5-01e5-4978-a2f2-6005a3fe657b" 00:15:40.098 ], 00:15:40.099 "product_name": "Malloc disk", 00:15:40.099 "block_size": 512, 00:15:40.099 "num_blocks": 65536, 00:15:40.099 "uuid": "f91a22b5-01e5-4978-a2f2-6005a3fe657b", 00:15:40.099 "assigned_rate_limits": { 00:15:40.099 "rw_ios_per_sec": 0, 00:15:40.099 "rw_mbytes_per_sec": 0, 00:15:40.099 "r_mbytes_per_sec": 0, 00:15:40.099 "w_mbytes_per_sec": 0 00:15:40.099 }, 00:15:40.099 "claimed": true, 00:15:40.099 "claim_type": "exclusive_write", 00:15:40.099 "zoned": false, 00:15:40.099 "supported_io_types": { 00:15:40.099 "read": true, 00:15:40.099 "write": true, 00:15:40.099 "unmap": true, 00:15:40.099 "write_zeroes": true, 00:15:40.099 "flush": true, 00:15:40.099 
"reset": true, 00:15:40.099 "compare": false, 00:15:40.099 "compare_and_write": false, 00:15:40.099 "abort": true, 00:15:40.099 "nvme_admin": false, 00:15:40.099 "nvme_io": false 00:15:40.099 }, 00:15:40.099 "memory_domains": [ 00:15:40.099 { 00:15:40.099 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:40.099 "dma_device_type": 2 00:15:40.099 } 00:15:40.099 ], 00:15:40.099 "driver_specific": {} 00:15:40.099 } 00:15:40.099 ] 00:15:40.099 10:29:33 -- common/autotest_common.sh@895 -- # return 0 00:15:40.099 10:29:33 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:40.099 10:29:33 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:40.099 10:29:33 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:15:40.099 10:29:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:40.099 10:29:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:40.099 10:29:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:40.099 10:29:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:40.099 10:29:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:40.099 10:29:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:40.099 10:29:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:40.099 10:29:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:40.099 10:29:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:40.099 10:29:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:40.099 10:29:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:40.357 10:29:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:40.357 "name": "Existed_Raid", 00:15:40.357 "uuid": "9c27c73c-1579-4fa3-9b77-e31335e470bf", 00:15:40.357 "strip_size_kb": 64, 00:15:40.357 "state": "online", 00:15:40.357 "raid_level": "concat", 00:15:40.357 "superblock": true, 00:15:40.357 "num_base_bdevs": 2, 00:15:40.357 "num_base_bdevs_discovered": 2, 00:15:40.357 "num_base_bdevs_operational": 2, 00:15:40.357 "base_bdevs_list": [ 00:15:40.357 { 00:15:40.357 "name": "BaseBdev1", 00:15:40.357 "uuid": "1c6926fb-3d4a-4683-99e9-a8db545bea9f", 00:15:40.357 "is_configured": true, 00:15:40.357 "data_offset": 2048, 00:15:40.357 "data_size": 63488 00:15:40.357 }, 00:15:40.357 { 00:15:40.357 "name": "BaseBdev2", 00:15:40.357 "uuid": "f91a22b5-01e5-4978-a2f2-6005a3fe657b", 00:15:40.358 "is_configured": true, 00:15:40.358 "data_offset": 2048, 00:15:40.358 "data_size": 63488 00:15:40.358 } 00:15:40.358 ] 00:15:40.358 }' 00:15:40.358 10:29:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:40.358 10:29:34 -- common/autotest_common.sh@10 -- # set +x 00:15:40.926 10:29:34 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:41.184 [2024-07-12 10:29:34.928192] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:41.184 [2024-07-12 10:29:34.928223] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:41.184 [2024-07-12 10:29:34.928306] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:41.184 10:29:34 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:41.184 10:29:34 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:15:41.184 10:29:34 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:41.184 10:29:34 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:41.184 
10:29:34 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:15:41.184 10:29:34 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:15:41.184 10:29:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:41.184 10:29:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:15:41.184 10:29:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:41.184 10:29:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:41.184 10:29:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:15:41.184 10:29:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:41.184 10:29:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:41.185 10:29:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:41.185 10:29:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:41.185 10:29:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:41.185 10:29:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:41.443 10:29:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:41.443 "name": "Existed_Raid", 00:15:41.443 "uuid": "9c27c73c-1579-4fa3-9b77-e31335e470bf", 00:15:41.443 "strip_size_kb": 64, 00:15:41.443 "state": "offline", 00:15:41.443 "raid_level": "concat", 00:15:41.443 "superblock": true, 00:15:41.443 "num_base_bdevs": 2, 00:15:41.443 "num_base_bdevs_discovered": 1, 00:15:41.443 "num_base_bdevs_operational": 1, 00:15:41.443 "base_bdevs_list": [ 00:15:41.443 { 00:15:41.443 "name": null, 00:15:41.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.443 "is_configured": false, 00:15:41.443 "data_offset": 2048, 00:15:41.443 "data_size": 63488 00:15:41.443 }, 00:15:41.443 { 00:15:41.443 "name": "BaseBdev2", 00:15:41.443 "uuid": "f91a22b5-01e5-4978-a2f2-6005a3fe657b", 00:15:41.443 "is_configured": true, 00:15:41.443 "data_offset": 2048, 00:15:41.443 "data_size": 63488 00:15:41.443 } 00:15:41.443 ] 00:15:41.443 }' 00:15:41.443 10:29:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:41.443 10:29:35 -- common/autotest_common.sh@10 -- # set +x 00:15:42.007 10:29:35 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:42.007 10:29:35 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:42.007 10:29:35 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:42.007 10:29:35 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:42.265 10:29:36 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:42.265 10:29:36 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:42.265 10:29:36 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:42.523 [2024-07-12 10:29:36.368589] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:42.523 [2024-07-12 10:29:36.368655] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:15:42.782 10:29:36 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:42.782 10:29:36 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:42.782 10:29:36 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:42.782 10:29:36 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:42.782 10:29:36 -- bdev/bdev_raid.sh@281 -- # 
raid_bdev= 00:15:42.782 10:29:36 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:42.782 10:29:36 -- bdev/bdev_raid.sh@287 -- # killprocess 116227 00:15:42.782 10:29:36 -- common/autotest_common.sh@926 -- # '[' -z 116227 ']' 00:15:42.782 10:29:36 -- common/autotest_common.sh@930 -- # kill -0 116227 00:15:42.782 10:29:36 -- common/autotest_common.sh@931 -- # uname 00:15:42.782 10:29:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:42.782 10:29:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 116227 00:15:42.782 killing process with pid 116227 00:15:42.782 10:29:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:42.782 10:29:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:42.782 10:29:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 116227' 00:15:42.782 10:29:36 -- common/autotest_common.sh@945 -- # kill 116227 00:15:42.782 10:29:36 -- common/autotest_common.sh@950 -- # wait 116227 00:15:42.782 [2024-07-12 10:29:36.682544] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:42.782 [2024-07-12 10:29:36.682651] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:43.718 ************************************ 00:15:43.718 END TEST raid_state_function_test_sb 00:15:43.718 ************************************ 00:15:43.718 10:29:37 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:43.718 00:15:43.718 real 0m10.448s 00:15:43.718 user 0m18.523s 00:15:43.718 sys 0m0.993s 00:15:43.718 10:29:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:43.718 10:29:37 -- common/autotest_common.sh@10 -- # set +x 00:15:43.977 10:29:37 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:15:43.977 10:29:37 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:15:43.977 10:29:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:43.977 10:29:37 -- common/autotest_common.sh@10 -- # set +x 00:15:43.977 ************************************ 00:15:43.977 START TEST raid_superblock_test 00:15:43.977 ************************************ 00:15:43.977 10:29:37 -- common/autotest_common.sh@1104 -- # raid_superblock_test concat 2 00:15:43.977 10:29:37 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:15:43.977 10:29:37 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:15:43.977 10:29:37 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:15:43.977 10:29:37 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:15:43.977 10:29:37 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:15:43.977 10:29:37 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:15:43.977 10:29:37 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:15:43.977 10:29:37 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:15:43.977 10:29:37 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:15:43.977 10:29:37 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:15:43.977 10:29:37 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:15:43.977 10:29:37 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:15:43.977 10:29:37 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:15:43.977 10:29:37 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:15:43.977 10:29:37 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:15:43.977 10:29:37 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:15:43.977 10:29:37 -- bdev/bdev_raid.sh@357 -- # raid_pid=116569 00:15:43.977 10:29:37 -- bdev/bdev_raid.sh@358 -- # waitforlisten 116569 
/var/tmp/spdk-raid.sock 00:15:43.977 10:29:37 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:43.977 10:29:37 -- common/autotest_common.sh@819 -- # '[' -z 116569 ']' 00:15:43.977 10:29:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:43.977 10:29:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:43.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:43.977 10:29:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:43.977 10:29:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:43.977 10:29:37 -- common/autotest_common.sh@10 -- # set +x 00:15:43.977 [2024-07-12 10:29:37.716693] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:43.977 [2024-07-12 10:29:37.716917] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116569 ] 00:15:43.977 [2024-07-12 10:29:37.879148] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:44.237 [2024-07-12 10:29:38.060888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.496 [2024-07-12 10:29:38.245572] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:44.755 10:29:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:44.755 10:29:38 -- common/autotest_common.sh@852 -- # return 0 00:15:44.755 10:29:38 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:15:44.755 10:29:38 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:44.755 10:29:38 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:15:44.755 10:29:38 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:15:44.755 10:29:38 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:44.755 10:29:38 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:44.755 10:29:38 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:44.755 10:29:38 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:44.755 10:29:38 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:45.014 malloc1 00:15:45.014 10:29:38 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:45.272 [2024-07-12 10:29:38.991154] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:45.272 [2024-07-12 10:29:38.991505] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:45.272 [2024-07-12 10:29:38.991648] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:15:45.272 [2024-07-12 10:29:38.991797] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:45.272 [2024-07-12 10:29:38.993760] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:45.272 [2024-07-12 10:29:38.993946] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:45.272 pt1 00:15:45.272 10:29:38 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 
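For the superblock test each base bdev is built as a two-layer stack: a malloc bdev wrapped in a passthru bdev pinned to a fixed UUID via -u. The fixed UUID appears to be the point, since a raid superblock identifies its members by UUID; that rationale is an inference, the log itself does not state it. One iteration of the loop traced above, RPC names and arguments exactly as logged:

  # Base bdev 1: 32 MiB malloc (512 B blocks, hence the 65536-block bdevs seen
  # elsewhere in this log) wrapped in a UUID-pinned passthru.
  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
  rpc bdev_malloc_create 32 512 -b malloc1
  rpc bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001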
00:15:45.272 10:29:38 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:45.272 10:29:38 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:15:45.272 10:29:38 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:15:45.272 10:29:38 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:45.272 10:29:38 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:45.272 10:29:38 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:45.272 10:29:38 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:45.272 10:29:38 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:45.530 malloc2 00:15:45.530 10:29:39 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:45.789 [2024-07-12 10:29:39.468782] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:45.789 [2024-07-12 10:29:39.469011] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:45.789 [2024-07-12 10:29:39.469084] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:15:45.789 [2024-07-12 10:29:39.469232] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:45.789 [2024-07-12 10:29:39.471224] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:45.789 [2024-07-12 10:29:39.471396] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:45.789 pt2 00:15:45.789 10:29:39 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:45.789 10:29:39 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:45.789 10:29:39 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:15:45.789 [2024-07-12 10:29:39.656868] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:45.789 [2024-07-12 10:29:39.658843] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:45.789 [2024-07-12 10:29:39.659130] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:15:45.789 [2024-07-12 10:29:39.659269] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:45.789 [2024-07-12 10:29:39.659438] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:15:45.789 [2024-07-12 10:29:39.659817] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:15:45.789 [2024-07-12 10:29:39.659975] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:15:45.789 [2024-07-12 10:29:39.660203] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:45.789 10:29:39 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:45.789 10:29:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:45.789 10:29:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:45.789 10:29:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:45.789 10:29:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:45.789 10:29:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 
00:15:45.789 10:29:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:45.789 10:29:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:45.789 10:29:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:45.789 10:29:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:45.789 10:29:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:45.789 10:29:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.047 10:29:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:46.047 "name": "raid_bdev1", 00:15:46.047 "uuid": "30bbaec2-17d4-4b8d-afdc-8c8e08525277", 00:15:46.047 "strip_size_kb": 64, 00:15:46.047 "state": "online", 00:15:46.047 "raid_level": "concat", 00:15:46.047 "superblock": true, 00:15:46.047 "num_base_bdevs": 2, 00:15:46.047 "num_base_bdevs_discovered": 2, 00:15:46.048 "num_base_bdevs_operational": 2, 00:15:46.048 "base_bdevs_list": [ 00:15:46.048 { 00:15:46.048 "name": "pt1", 00:15:46.048 "uuid": "e4c537c8-0a08-5849-a76a-04ad59b5c240", 00:15:46.048 "is_configured": true, 00:15:46.048 "data_offset": 2048, 00:15:46.048 "data_size": 63488 00:15:46.048 }, 00:15:46.048 { 00:15:46.048 "name": "pt2", 00:15:46.048 "uuid": "88d98a36-8738-5401-a55e-b9a31a48e35e", 00:15:46.048 "is_configured": true, 00:15:46.048 "data_offset": 2048, 00:15:46.048 "data_size": 63488 00:15:46.048 } 00:15:46.048 ] 00:15:46.048 }' 00:15:46.048 10:29:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:46.048 10:29:39 -- common/autotest_common.sh@10 -- # set +x 00:15:46.614 10:29:40 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:46.614 10:29:40 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:15:46.873 [2024-07-12 10:29:40.689142] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:46.873 10:29:40 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=30bbaec2-17d4-4b8d-afdc-8c8e08525277 00:15:46.873 10:29:40 -- bdev/bdev_raid.sh@380 -- # '[' -z 30bbaec2-17d4-4b8d-afdc-8c8e08525277 ']' 00:15:46.873 10:29:40 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:47.131 [2024-07-12 10:29:40.928997] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:47.131 [2024-07-12 10:29:40.929124] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:47.131 [2024-07-12 10:29:40.929272] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:47.131 [2024-07-12 10:29:40.929349] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:47.131 [2024-07-12 10:29:40.929644] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:15:47.131 10:29:40 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:47.131 10:29:40 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:15:47.390 10:29:41 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:15:47.390 10:29:41 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:15:47.390 10:29:41 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:47.390 10:29:41 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
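Teardown runs in dependency order: bdev_raid_delete raid_bdev1 goes first (releasing its claims on pt1 and pt2), then each passthru bdev is deleted in turn, as the loop above and just below shows. The harness finishes by probing that no passthru bdev survived, with the bdev_get_bdevs/jq pair traced a few lines further on; a sketch of that final assertion:

  # After cleanup, no passthru bdev may remain registered.
  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
  left=$(rpc bdev_get_bdevs | jq -r '[.[] | select(.product_name == "passthru")] | any')
  [[ "$left" == "false" ]] || echo "passthru bdevs leaked"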
00:15:47.649 10:29:41 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:47.649 10:29:41 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:47.649 10:29:41 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:47.649 10:29:41 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:47.907 10:29:41 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:15:47.907 10:29:41 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:15:47.907 10:29:41 -- common/autotest_common.sh@640 -- # local es=0 00:15:47.907 10:29:41 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:15:47.907 10:29:41 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:47.907 10:29:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:47.907 10:29:41 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:47.907 10:29:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:47.907 10:29:41 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:47.907 10:29:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:47.907 10:29:41 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:47.907 10:29:41 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:47.907 10:29:41 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:15:48.166 [2024-07-12 10:29:42.009151] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:48.166 [2024-07-12 10:29:42.011018] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:48.166 [2024-07-12 10:29:42.011077] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:15:48.166 [2024-07-12 10:29:42.011137] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:15:48.166 [2024-07-12 10:29:42.011172] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:48.166 [2024-07-12 10:29:42.011181] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state configuring 00:15:48.166 request: 00:15:48.166 { 00:15:48.166 "name": "raid_bdev1", 00:15:48.166 "raid_level": "concat", 00:15:48.166 "base_bdevs": [ 00:15:48.166 "malloc1", 00:15:48.166 "malloc2" 00:15:48.166 ], 00:15:48.166 "superblock": false, 00:15:48.166 "strip_size_kb": 64, 00:15:48.166 "method": "bdev_raid_create", 00:15:48.166 "req_id": 1 00:15:48.166 } 00:15:48.166 Got JSON-RPC error response 00:15:48.166 response: 00:15:48.166 { 00:15:48.166 "code": -17, 00:15:48.166 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:48.166 } 00:15:48.166 10:29:42 -- common/autotest_common.sh@643 -- # es=1 00:15:48.166 10:29:42 -- 
common/autotest_common.sh@651 -- # (( es > 128 )) 00:15:48.166 10:29:42 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:15:48.166 10:29:42 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:15:48.166 10:29:42 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:15:48.166 10:29:42 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:48.423 10:29:42 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:15:48.423 10:29:42 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:15:48.423 10:29:42 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:48.682 [2024-07-12 10:29:42.457160] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:48.682 [2024-07-12 10:29:42.457235] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.682 [2024-07-12 10:29:42.457267] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:48.682 [2024-07-12 10:29:42.457291] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.682 [2024-07-12 10:29:42.459396] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.682 [2024-07-12 10:29:42.459444] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:48.682 [2024-07-12 10:29:42.459520] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:15:48.682 [2024-07-12 10:29:42.459572] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:48.682 pt1 00:15:48.682 10:29:42 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:15:48.682 10:29:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:48.682 10:29:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:48.682 10:29:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:48.682 10:29:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:48.682 10:29:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:48.682 10:29:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:48.682 10:29:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:48.682 10:29:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:48.682 10:29:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:48.682 10:29:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:48.682 10:29:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.941 10:29:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:48.941 "name": "raid_bdev1", 00:15:48.941 "uuid": "30bbaec2-17d4-4b8d-afdc-8c8e08525277", 00:15:48.941 "strip_size_kb": 64, 00:15:48.941 "state": "configuring", 00:15:48.941 "raid_level": "concat", 00:15:48.941 "superblock": true, 00:15:48.941 "num_base_bdevs": 2, 00:15:48.941 "num_base_bdevs_discovered": 1, 00:15:48.941 "num_base_bdevs_operational": 2, 00:15:48.941 "base_bdevs_list": [ 00:15:48.941 { 00:15:48.941 "name": "pt1", 00:15:48.941 "uuid": "e4c537c8-0a08-5849-a76a-04ad59b5c240", 00:15:48.941 "is_configured": true, 00:15:48.941 "data_offset": 2048, 00:15:48.941 "data_size": 63488 00:15:48.941 }, 00:15:48.941 { 00:15:48.941 "name": null, 00:15:48.941 "uuid": 
"88d98a36-8738-5401-a55e-b9a31a48e35e", 00:15:48.941 "is_configured": false, 00:15:48.941 "data_offset": 2048, 00:15:48.941 "data_size": 63488 00:15:48.941 } 00:15:48.941 ] 00:15:48.941 }' 00:15:48.941 10:29:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:48.941 10:29:42 -- common/autotest_common.sh@10 -- # set +x 00:15:49.506 10:29:43 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:15:49.506 10:29:43 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:15:49.506 10:29:43 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:49.506 10:29:43 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:49.796 [2024-07-12 10:29:43.525349] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:49.796 [2024-07-12 10:29:43.525419] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:49.796 [2024-07-12 10:29:43.525450] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:49.796 [2024-07-12 10:29:43.525475] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:49.796 [2024-07-12 10:29:43.525881] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:49.796 [2024-07-12 10:29:43.525924] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:49.796 [2024-07-12 10:29:43.526006] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:49.796 [2024-07-12 10:29:43.526029] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:49.796 [2024-07-12 10:29:43.526127] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:15:49.796 [2024-07-12 10:29:43.526145] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:49.796 [2024-07-12 10:29:43.526251] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:15:49.796 [2024-07-12 10:29:43.526535] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:15:49.796 [2024-07-12 10:29:43.526554] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:15:49.796 [2024-07-12 10:29:43.526663] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:49.796 pt2 00:15:49.796 10:29:43 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:15:49.796 10:29:43 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:49.796 10:29:43 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:49.796 10:29:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:49.796 10:29:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:49.796 10:29:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:49.796 10:29:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:49.796 10:29:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:49.796 10:29:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:49.796 10:29:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:49.796 10:29:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:49.796 10:29:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:49.796 10:29:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:49.796 10:29:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.096 10:29:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:50.097 "name": "raid_bdev1", 00:15:50.097 "uuid": "30bbaec2-17d4-4b8d-afdc-8c8e08525277", 00:15:50.097 "strip_size_kb": 64, 00:15:50.097 "state": "online", 00:15:50.097 "raid_level": "concat", 00:15:50.097 "superblock": true, 00:15:50.097 "num_base_bdevs": 2, 00:15:50.097 "num_base_bdevs_discovered": 2, 00:15:50.097 "num_base_bdevs_operational": 2, 00:15:50.097 "base_bdevs_list": [ 00:15:50.097 { 00:15:50.097 "name": "pt1", 00:15:50.097 "uuid": "e4c537c8-0a08-5849-a76a-04ad59b5c240", 00:15:50.097 "is_configured": true, 00:15:50.097 "data_offset": 2048, 00:15:50.097 "data_size": 63488 00:15:50.097 }, 00:15:50.097 { 00:15:50.097 "name": "pt2", 00:15:50.097 "uuid": "88d98a36-8738-5401-a55e-b9a31a48e35e", 00:15:50.097 "is_configured": true, 00:15:50.097 "data_offset": 2048, 00:15:50.097 "data_size": 63488 00:15:50.097 } 00:15:50.097 ] 00:15:50.097 }' 00:15:50.097 10:29:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:50.097 10:29:43 -- common/autotest_common.sh@10 -- # set +x 00:15:50.666 10:29:44 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:50.666 10:29:44 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:15:50.924 [2024-07-12 10:29:44.608196] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:50.924 10:29:44 -- bdev/bdev_raid.sh@430 -- # '[' 30bbaec2-17d4-4b8d-afdc-8c8e08525277 '!=' 30bbaec2-17d4-4b8d-afdc-8c8e08525277 ']' 00:15:50.924 10:29:44 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:15:50.924 10:29:44 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:50.924 10:29:44 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:50.924 10:29:44 -- bdev/bdev_raid.sh@511 -- # killprocess 116569 00:15:50.924 10:29:44 -- common/autotest_common.sh@926 -- # '[' -z 116569 ']' 00:15:50.924 10:29:44 -- common/autotest_common.sh@930 -- # kill -0 116569 00:15:50.924 10:29:44 -- common/autotest_common.sh@931 -- # uname 00:15:50.924 10:29:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:50.924 10:29:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 116569 00:15:50.924 10:29:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:50.924 10:29:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:50.924 10:29:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 116569' 00:15:50.924 killing process with pid 116569 00:15:50.924 10:29:44 -- common/autotest_common.sh@945 -- # kill 116569 00:15:50.924 10:29:44 -- common/autotest_common.sh@950 -- # wait 116569 00:15:50.924 [2024-07-12 10:29:44.637140] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:50.924 [2024-07-12 10:29:44.637212] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:50.924 [2024-07-12 10:29:44.637269] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:50.924 [2024-07-12 10:29:44.637280] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:15:50.924 [2024-07-12 10:29:44.770568] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:51.860 ************************************ 00:15:51.860 END TEST raid_superblock_test 00:15:51.860 
************************************ 00:15:51.860 10:29:45 -- bdev/bdev_raid.sh@513 -- # return 0 00:15:51.860 00:15:51.860 real 0m8.122s 00:15:51.860 user 0m13.817s 00:15:51.860 sys 0m0.994s 00:15:51.860 10:29:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:51.860 10:29:45 -- common/autotest_common.sh@10 -- # set +x 00:15:52.119 10:29:45 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:15:52.119 10:29:45 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:15:52.119 10:29:45 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:15:52.119 10:29:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:52.119 10:29:45 -- common/autotest_common.sh@10 -- # set +x 00:15:52.119 ************************************ 00:15:52.119 START TEST raid_state_function_test 00:15:52.119 ************************************ 00:15:52.119 10:29:45 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 2 false 00:15:52.119 10:29:45 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:15:52.119 10:29:45 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:15:52.119 10:29:45 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:15:52.119 10:29:45 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:52.119 10:29:45 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:15:52.119 10:29:45 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:52.119 10:29:45 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:52.119 10:29:45 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:15:52.119 10:29:45 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:52.119 10:29:45 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:52.119 10:29:45 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:15:52.119 10:29:45 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:52.119 10:29:45 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:52.119 10:29:45 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:52.119 10:29:45 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:52.119 10:29:45 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:52.119 10:29:45 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:52.119 10:29:45 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:52.119 10:29:45 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:15:52.119 10:29:45 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:15:52.119 10:29:45 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:15:52.119 10:29:45 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:15:52.119 10:29:45 -- bdev/bdev_raid.sh@226 -- # raid_pid=116833 00:15:52.119 10:29:45 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 116833' 00:15:52.119 Process raid pid: 116833 00:15:52.119 10:29:45 -- bdev/bdev_raid.sh@228 -- # waitforlisten 116833 /var/tmp/spdk-raid.sock 00:15:52.119 10:29:45 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:52.119 10:29:45 -- common/autotest_common.sh@819 -- # '[' -z 116833 ']' 00:15:52.119 10:29:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:52.119 10:29:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:52.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
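Setup for this run differs from the concat cases above in two ways: raid1 takes no strip size (strip_size=0 and no -z argument, since a mirror has no stripes), and superblock=false drops the -s flag as well. The resulting create call, exactly as traced further down:

  # raid1: mirrored, no strip size, no on-disk superblock in this configuration.
  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
  rpc bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid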
00:15:52.119 10:29:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:52.119 10:29:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:52.119 10:29:45 -- common/autotest_common.sh@10 -- # set +x 00:15:52.119 [2024-07-12 10:29:45.897431] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:52.119 [2024-07-12 10:29:45.897651] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:52.377 [2024-07-12 10:29:46.066956] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:52.377 [2024-07-12 10:29:46.247578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:52.636 [2024-07-12 10:29:46.435327] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:52.894 10:29:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:52.894 10:29:46 -- common/autotest_common.sh@852 -- # return 0 00:15:52.894 10:29:46 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:53.152 [2024-07-12 10:29:46.961433] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:53.152 [2024-07-12 10:29:46.961535] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:53.152 [2024-07-12 10:29:46.961548] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:53.152 [2024-07-12 10:29:46.961566] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:53.152 10:29:46 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:53.152 10:29:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:53.152 10:29:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:53.152 10:29:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:53.152 10:29:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:53.152 10:29:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:53.152 10:29:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:53.152 10:29:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:53.152 10:29:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:53.152 10:29:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:53.152 10:29:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:53.152 10:29:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:53.411 10:29:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:53.411 "name": "Existed_Raid", 00:15:53.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.411 "strip_size_kb": 0, 00:15:53.411 "state": "configuring", 00:15:53.411 "raid_level": "raid1", 00:15:53.411 "superblock": false, 00:15:53.411 "num_base_bdevs": 2, 00:15:53.411 "num_base_bdevs_discovered": 0, 00:15:53.411 "num_base_bdevs_operational": 2, 00:15:53.411 "base_bdevs_list": [ 00:15:53.411 { 00:15:53.411 "name": "BaseBdev1", 00:15:53.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.411 "is_configured": false, 00:15:53.411 
"data_offset": 0, 00:15:53.411 "data_size": 0 00:15:53.411 }, 00:15:53.411 { 00:15:53.411 "name": "BaseBdev2", 00:15:53.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.411 "is_configured": false, 00:15:53.411 "data_offset": 0, 00:15:53.411 "data_size": 0 00:15:53.411 } 00:15:53.411 ] 00:15:53.411 }' 00:15:53.411 10:29:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:53.411 10:29:47 -- common/autotest_common.sh@10 -- # set +x 00:15:53.978 10:29:47 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:54.235 [2024-07-12 10:29:48.081512] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:54.235 [2024-07-12 10:29:48.081543] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:15:54.235 10:29:48 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:54.493 [2024-07-12 10:29:48.341596] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:54.493 [2024-07-12 10:29:48.341662] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:54.493 [2024-07-12 10:29:48.341673] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:54.493 [2024-07-12 10:29:48.341697] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:54.493 10:29:48 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:54.751 [2024-07-12 10:29:48.558935] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:54.751 BaseBdev1 00:15:54.751 10:29:48 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:54.751 10:29:48 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:54.751 10:29:48 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:54.751 10:29:48 -- common/autotest_common.sh@889 -- # local i 00:15:54.751 10:29:48 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:54.751 10:29:48 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:54.751 10:29:48 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:55.008 10:29:48 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:55.266 [ 00:15:55.266 { 00:15:55.266 "name": "BaseBdev1", 00:15:55.266 "aliases": [ 00:15:55.266 "b3697ccd-d6b4-452d-80b0-d4cdd1c53b3f" 00:15:55.266 ], 00:15:55.266 "product_name": "Malloc disk", 00:15:55.266 "block_size": 512, 00:15:55.266 "num_blocks": 65536, 00:15:55.266 "uuid": "b3697ccd-d6b4-452d-80b0-d4cdd1c53b3f", 00:15:55.266 "assigned_rate_limits": { 00:15:55.266 "rw_ios_per_sec": 0, 00:15:55.266 "rw_mbytes_per_sec": 0, 00:15:55.266 "r_mbytes_per_sec": 0, 00:15:55.266 "w_mbytes_per_sec": 0 00:15:55.266 }, 00:15:55.266 "claimed": true, 00:15:55.266 "claim_type": "exclusive_write", 00:15:55.266 "zoned": false, 00:15:55.266 "supported_io_types": { 00:15:55.266 "read": true, 00:15:55.266 "write": true, 00:15:55.266 "unmap": true, 00:15:55.266 "write_zeroes": true, 00:15:55.266 "flush": true, 00:15:55.266 "reset": true, 00:15:55.266 "compare": false, 
00:15:55.266 "compare_and_write": false, 00:15:55.266 "abort": true, 00:15:55.266 "nvme_admin": false, 00:15:55.266 "nvme_io": false 00:15:55.266 }, 00:15:55.266 "memory_domains": [ 00:15:55.266 { 00:15:55.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:55.266 "dma_device_type": 2 00:15:55.266 } 00:15:55.266 ], 00:15:55.266 "driver_specific": {} 00:15:55.266 } 00:15:55.266 ] 00:15:55.266 10:29:48 -- common/autotest_common.sh@895 -- # return 0 00:15:55.266 10:29:48 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:55.266 10:29:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:55.266 10:29:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:55.266 10:29:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:55.266 10:29:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:55.266 10:29:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:55.266 10:29:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:55.266 10:29:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:55.266 10:29:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:55.266 10:29:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:55.266 10:29:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:55.266 10:29:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:55.524 10:29:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:55.524 "name": "Existed_Raid", 00:15:55.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.524 "strip_size_kb": 0, 00:15:55.524 "state": "configuring", 00:15:55.524 "raid_level": "raid1", 00:15:55.524 "superblock": false, 00:15:55.524 "num_base_bdevs": 2, 00:15:55.524 "num_base_bdevs_discovered": 1, 00:15:55.524 "num_base_bdevs_operational": 2, 00:15:55.524 "base_bdevs_list": [ 00:15:55.524 { 00:15:55.524 "name": "BaseBdev1", 00:15:55.524 "uuid": "b3697ccd-d6b4-452d-80b0-d4cdd1c53b3f", 00:15:55.524 "is_configured": true, 00:15:55.524 "data_offset": 0, 00:15:55.524 "data_size": 65536 00:15:55.524 }, 00:15:55.524 { 00:15:55.524 "name": "BaseBdev2", 00:15:55.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.524 "is_configured": false, 00:15:55.524 "data_offset": 0, 00:15:55.524 "data_size": 0 00:15:55.524 } 00:15:55.524 ] 00:15:55.524 }' 00:15:55.524 10:29:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:55.524 10:29:49 -- common/autotest_common.sh@10 -- # set +x 00:15:56.090 10:29:49 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:56.348 [2024-07-12 10:29:50.083202] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:56.348 [2024-07-12 10:29:50.083235] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:15:56.348 10:29:50 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:15:56.348 10:29:50 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:56.606 [2024-07-12 10:29:50.319283] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:56.606 [2024-07-12 10:29:50.321113] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:56.606 [2024-07-12 
10:29:50.321163] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:56.606 10:29:50 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:56.606 10:29:50 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:56.606 10:29:50 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:56.606 10:29:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:56.606 10:29:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:56.606 10:29:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:56.606 10:29:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:56.606 10:29:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:56.606 10:29:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:56.606 10:29:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:56.606 10:29:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:56.606 10:29:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:56.606 10:29:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:56.606 10:29:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:56.864 10:29:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:56.864 "name": "Existed_Raid", 00:15:56.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.864 "strip_size_kb": 0, 00:15:56.864 "state": "configuring", 00:15:56.864 "raid_level": "raid1", 00:15:56.864 "superblock": false, 00:15:56.864 "num_base_bdevs": 2, 00:15:56.864 "num_base_bdevs_discovered": 1, 00:15:56.864 "num_base_bdevs_operational": 2, 00:15:56.864 "base_bdevs_list": [ 00:15:56.864 { 00:15:56.864 "name": "BaseBdev1", 00:15:56.864 "uuid": "b3697ccd-d6b4-452d-80b0-d4cdd1c53b3f", 00:15:56.864 "is_configured": true, 00:15:56.864 "data_offset": 0, 00:15:56.864 "data_size": 65536 00:15:56.864 }, 00:15:56.864 { 00:15:56.864 "name": "BaseBdev2", 00:15:56.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.864 "is_configured": false, 00:15:56.864 "data_offset": 0, 00:15:56.864 "data_size": 0 00:15:56.864 } 00:15:56.864 ] 00:15:56.864 }' 00:15:56.864 10:29:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:56.864 10:29:50 -- common/autotest_common.sh@10 -- # set +x 00:15:57.429 10:29:51 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:57.688 [2024-07-12 10:29:51.465260] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:57.688 [2024-07-12 10:29:51.465310] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:15:57.688 [2024-07-12 10:29:51.465320] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:57.688 [2024-07-12 10:29:51.465459] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005520 00:15:57.688 [2024-07-12 10:29:51.465798] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:15:57.688 [2024-07-12 10:29:51.465819] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:15:57.688 [2024-07-12 10:29:51.466047] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:57.688 BaseBdev2 00:15:57.688 10:29:51 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:57.688 
10:29:51 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:15:57.688 10:29:51 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:57.688 10:29:51 -- common/autotest_common.sh@889 -- # local i 00:15:57.688 10:29:51 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:57.688 10:29:51 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:57.688 10:29:51 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:57.946 10:29:51 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:58.204 [ 00:15:58.204 { 00:15:58.204 "name": "BaseBdev2", 00:15:58.204 "aliases": [ 00:15:58.204 "dcddab90-ff66-475e-b646-14e6d49c276e" 00:15:58.204 ], 00:15:58.204 "product_name": "Malloc disk", 00:15:58.204 "block_size": 512, 00:15:58.204 "num_blocks": 65536, 00:15:58.204 "uuid": "dcddab90-ff66-475e-b646-14e6d49c276e", 00:15:58.204 "assigned_rate_limits": { 00:15:58.204 "rw_ios_per_sec": 0, 00:15:58.204 "rw_mbytes_per_sec": 0, 00:15:58.204 "r_mbytes_per_sec": 0, 00:15:58.204 "w_mbytes_per_sec": 0 00:15:58.204 }, 00:15:58.204 "claimed": true, 00:15:58.204 "claim_type": "exclusive_write", 00:15:58.204 "zoned": false, 00:15:58.204 "supported_io_types": { 00:15:58.204 "read": true, 00:15:58.204 "write": true, 00:15:58.204 "unmap": true, 00:15:58.204 "write_zeroes": true, 00:15:58.204 "flush": true, 00:15:58.204 "reset": true, 00:15:58.204 "compare": false, 00:15:58.204 "compare_and_write": false, 00:15:58.204 "abort": true, 00:15:58.204 "nvme_admin": false, 00:15:58.204 "nvme_io": false 00:15:58.204 }, 00:15:58.204 "memory_domains": [ 00:15:58.204 { 00:15:58.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:58.204 "dma_device_type": 2 00:15:58.204 } 00:15:58.204 ], 00:15:58.204 "driver_specific": {} 00:15:58.204 } 00:15:58.204 ] 00:15:58.204 10:29:51 -- common/autotest_common.sh@895 -- # return 0 00:15:58.204 10:29:51 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:58.204 10:29:51 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:58.204 10:29:51 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:58.204 10:29:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:58.204 10:29:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:58.204 10:29:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:58.204 10:29:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:58.204 10:29:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:58.204 10:29:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:58.204 10:29:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:58.204 10:29:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:58.204 10:29:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:58.204 10:29:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:58.204 10:29:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:58.204 10:29:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:58.204 "name": "Existed_Raid", 00:15:58.204 "uuid": "99b73195-cb7f-4819-b5d4-7743901e771f", 00:15:58.204 "strip_size_kb": 0, 00:15:58.204 "state": "online", 00:15:58.204 "raid_level": "raid1", 00:15:58.204 "superblock": false, 00:15:58.204 "num_base_bdevs": 2, 00:15:58.204 
"num_base_bdevs_discovered": 2, 00:15:58.204 "num_base_bdevs_operational": 2, 00:15:58.204 "base_bdevs_list": [ 00:15:58.204 { 00:15:58.204 "name": "BaseBdev1", 00:15:58.204 "uuid": "b3697ccd-d6b4-452d-80b0-d4cdd1c53b3f", 00:15:58.204 "is_configured": true, 00:15:58.204 "data_offset": 0, 00:15:58.204 "data_size": 65536 00:15:58.204 }, 00:15:58.204 { 00:15:58.204 "name": "BaseBdev2", 00:15:58.204 "uuid": "dcddab90-ff66-475e-b646-14e6d49c276e", 00:15:58.204 "is_configured": true, 00:15:58.204 "data_offset": 0, 00:15:58.204 "data_size": 65536 00:15:58.204 } 00:15:58.204 ] 00:15:58.204 }' 00:15:58.204 10:29:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:58.204 10:29:52 -- common/autotest_common.sh@10 -- # set +x 00:15:59.138 10:29:52 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:59.138 [2024-07-12 10:29:52.917602] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:59.138 10:29:52 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:59.138 10:29:52 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:15:59.138 10:29:52 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:59.138 10:29:52 -- bdev/bdev_raid.sh@196 -- # return 0 00:15:59.138 10:29:52 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:15:59.138 10:29:52 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:15:59.138 10:29:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:59.138 10:29:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:59.138 10:29:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:59.138 10:29:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:59.138 10:29:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:15:59.138 10:29:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:59.138 10:29:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:59.138 10:29:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:59.138 10:29:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:59.138 10:29:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:59.138 10:29:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:59.396 10:29:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:59.396 "name": "Existed_Raid", 00:15:59.396 "uuid": "99b73195-cb7f-4819-b5d4-7743901e771f", 00:15:59.396 "strip_size_kb": 0, 00:15:59.396 "state": "online", 00:15:59.396 "raid_level": "raid1", 00:15:59.396 "superblock": false, 00:15:59.396 "num_base_bdevs": 2, 00:15:59.396 "num_base_bdevs_discovered": 1, 00:15:59.396 "num_base_bdevs_operational": 1, 00:15:59.396 "base_bdevs_list": [ 00:15:59.396 { 00:15:59.396 "name": null, 00:15:59.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.396 "is_configured": false, 00:15:59.396 "data_offset": 0, 00:15:59.396 "data_size": 65536 00:15:59.396 }, 00:15:59.396 { 00:15:59.396 "name": "BaseBdev2", 00:15:59.396 "uuid": "dcddab90-ff66-475e-b646-14e6d49c276e", 00:15:59.396 "is_configured": true, 00:15:59.396 "data_offset": 0, 00:15:59.396 "data_size": 65536 00:15:59.396 } 00:15:59.396 ] 00:15:59.396 }' 00:15:59.396 10:29:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:59.396 10:29:53 -- common/autotest_common.sh@10 -- # set +x 00:16:00.330 10:29:53 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:00.331 10:29:53 -- bdev/bdev_raid.sh@273 -- # 
(( i < num_base_bdevs )) 00:16:00.331 10:29:53 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:00.331 10:29:53 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:00.331 10:29:54 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:00.331 10:29:54 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:00.331 10:29:54 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:00.589 [2024-07-12 10:29:54.364129] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:00.589 [2024-07-12 10:29:54.364160] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:00.589 [2024-07-12 10:29:54.364222] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:00.589 [2024-07-12 10:29:54.430504] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:00.590 [2024-07-12 10:29:54.430537] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:16:00.590 10:29:54 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:00.590 10:29:54 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:00.590 10:29:54 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:00.590 10:29:54 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:00.848 10:29:54 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:00.848 10:29:54 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:00.848 10:29:54 -- bdev/bdev_raid.sh@287 -- # killprocess 116833 00:16:00.848 10:29:54 -- common/autotest_common.sh@926 -- # '[' -z 116833 ']' 00:16:00.848 10:29:54 -- common/autotest_common.sh@930 -- # kill -0 116833 00:16:00.848 10:29:54 -- common/autotest_common.sh@931 -- # uname 00:16:00.848 10:29:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:00.848 10:29:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 116833 00:16:00.848 killing process with pid 116833 00:16:00.848 10:29:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:00.848 10:29:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:00.848 10:29:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 116833' 00:16:00.848 10:29:54 -- common/autotest_common.sh@945 -- # kill 116833 00:16:00.848 10:29:54 -- common/autotest_common.sh@950 -- # wait 116833 00:16:00.848 [2024-07-12 10:29:54.643662] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:00.848 [2024-07-12 10:29:54.643805] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:01.784 ************************************ 00:16:01.784 END TEST raid_state_function_test 00:16:01.784 ************************************ 00:16:01.784 10:29:55 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:01.784 00:16:01.784 real 0m9.825s 00:16:01.784 user 0m17.297s 00:16:01.784 sys 0m1.058s 00:16:01.784 10:29:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:01.784 10:29:55 -- common/autotest_common.sh@10 -- # set +x 00:16:01.784 10:29:55 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:16:01.784 10:29:55 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:16:01.784 10:29:55 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:16:01.784 10:29:55 -- common/autotest_common.sh@10 -- # set +x 00:16:02.044 ************************************ 00:16:02.044 START TEST raid_state_function_test_sb 00:16:02.044 ************************************ 00:16:02.044 10:29:55 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 2 true 00:16:02.044 10:29:55 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:16:02.044 10:29:55 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:16:02.044 10:29:55 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:16:02.044 10:29:55 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:02.044 10:29:55 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:16:02.044 10:29:55 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:02.044 10:29:55 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:02.044 10:29:55 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:16:02.044 10:29:55 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:02.044 10:29:55 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:02.044 10:29:55 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:16:02.044 10:29:55 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:02.044 10:29:55 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:02.044 10:29:55 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:02.044 10:29:55 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:02.044 10:29:55 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:02.044 10:29:55 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:02.044 10:29:55 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:02.044 10:29:55 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:16:02.044 10:29:55 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:16:02.044 10:29:55 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:16:02.044 10:29:55 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:16:02.044 10:29:55 -- bdev/bdev_raid.sh@226 -- # raid_pid=117162 00:16:02.044 10:29:55 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 117162' 00:16:02.044 Process raid pid: 117162 00:16:02.044 10:29:55 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:02.044 10:29:55 -- bdev/bdev_raid.sh@228 -- # waitforlisten 117162 /var/tmp/spdk-raid.sock 00:16:02.044 10:29:55 -- common/autotest_common.sh@819 -- # '[' -z 117162 ']' 00:16:02.044 10:29:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:02.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:02.044 10:29:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:02.044 10:29:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:02.044 10:29:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:02.044 10:29:55 -- common/autotest_common.sh@10 -- # set +x 00:16:02.044 [2024-07-12 10:29:55.786666] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
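A condensed sketch of the RPC flow this superblock variant is about to drive over the socket it is waiting for. Every call below appears verbatim later in this trace; the only change from the non-superblock test above is the -s flag that superblock_create_arg adds to bdev_raid_create:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'

Creating the raid before its base bdevs exist parks it in the "configuring" state; each bdev_malloc_create that follows is claimed by the raid, and the jq filter is how verify_raid_bdev_state reads back "state" and "num_base_bdevs_discovered". With -s the on-disk superblock occupies the start of each base bdev, which is why the dumps below show data_offset 2048 and data_size 63488 instead of the 0 and 65536 seen in the non-superblock run.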
00:16:02.044 [2024-07-12 10:29:55.786869] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:02.044 [2024-07-12 10:29:55.947139] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:02.303 [2024-07-12 10:29:56.127961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:02.561 [2024-07-12 10:29:56.317563] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:02.820 10:29:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:02.820 10:29:56 -- common/autotest_common.sh@852 -- # return 0 00:16:02.820 10:29:56 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:03.078 [2024-07-12 10:29:56.920014] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:03.078 [2024-07-12 10:29:56.920089] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:03.078 [2024-07-12 10:29:56.920102] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:03.078 [2024-07-12 10:29:56.920124] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:03.078 10:29:56 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:03.078 10:29:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:03.078 10:29:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:03.078 10:29:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:03.078 10:29:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:03.078 10:29:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:03.078 10:29:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:03.078 10:29:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:03.078 10:29:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:03.078 10:29:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:03.078 10:29:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:03.078 10:29:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:03.336 10:29:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:03.336 "name": "Existed_Raid", 00:16:03.336 "uuid": "6afe27de-e530-4cff-8a0f-3d5ccacd02a0", 00:16:03.336 "strip_size_kb": 0, 00:16:03.336 "state": "configuring", 00:16:03.336 "raid_level": "raid1", 00:16:03.336 "superblock": true, 00:16:03.336 "num_base_bdevs": 2, 00:16:03.336 "num_base_bdevs_discovered": 0, 00:16:03.336 "num_base_bdevs_operational": 2, 00:16:03.336 "base_bdevs_list": [ 00:16:03.336 { 00:16:03.336 "name": "BaseBdev1", 00:16:03.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.336 "is_configured": false, 00:16:03.336 "data_offset": 0, 00:16:03.336 "data_size": 0 00:16:03.336 }, 00:16:03.336 { 00:16:03.336 "name": "BaseBdev2", 00:16:03.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.336 "is_configured": false, 00:16:03.336 "data_offset": 0, 00:16:03.336 "data_size": 0 00:16:03.336 } 00:16:03.336 ] 00:16:03.336 }' 00:16:03.337 10:29:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:03.337 10:29:57 -- 
common/autotest_common.sh@10 -- # set +x 00:16:04.271 10:29:57 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:04.271 [2024-07-12 10:29:58.128022] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:04.271 [2024-07-12 10:29:58.128052] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:16:04.271 10:29:58 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:04.528 [2024-07-12 10:29:58.300089] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:04.528 [2024-07-12 10:29:58.300153] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:04.528 [2024-07-12 10:29:58.300164] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:04.528 [2024-07-12 10:29:58.300187] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:04.528 10:29:58 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:04.785 [2024-07-12 10:29:58.513318] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:04.785 BaseBdev1 00:16:04.785 10:29:58 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:04.785 10:29:58 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:04.785 10:29:58 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:04.785 10:29:58 -- common/autotest_common.sh@889 -- # local i 00:16:04.785 10:29:58 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:04.785 10:29:58 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:04.785 10:29:58 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:04.785 10:29:58 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:05.043 [ 00:16:05.043 { 00:16:05.043 "name": "BaseBdev1", 00:16:05.043 "aliases": [ 00:16:05.043 "1b52727c-53f3-49fe-8630-bf56ec614c2d" 00:16:05.043 ], 00:16:05.043 "product_name": "Malloc disk", 00:16:05.043 "block_size": 512, 00:16:05.043 "num_blocks": 65536, 00:16:05.043 "uuid": "1b52727c-53f3-49fe-8630-bf56ec614c2d", 00:16:05.043 "assigned_rate_limits": { 00:16:05.043 "rw_ios_per_sec": 0, 00:16:05.043 "rw_mbytes_per_sec": 0, 00:16:05.043 "r_mbytes_per_sec": 0, 00:16:05.043 "w_mbytes_per_sec": 0 00:16:05.043 }, 00:16:05.043 "claimed": true, 00:16:05.043 "claim_type": "exclusive_write", 00:16:05.043 "zoned": false, 00:16:05.043 "supported_io_types": { 00:16:05.043 "read": true, 00:16:05.043 "write": true, 00:16:05.043 "unmap": true, 00:16:05.043 "write_zeroes": true, 00:16:05.043 "flush": true, 00:16:05.043 "reset": true, 00:16:05.043 "compare": false, 00:16:05.043 "compare_and_write": false, 00:16:05.043 "abort": true, 00:16:05.043 "nvme_admin": false, 00:16:05.043 "nvme_io": false 00:16:05.043 }, 00:16:05.043 "memory_domains": [ 00:16:05.043 { 00:16:05.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:05.043 "dma_device_type": 2 00:16:05.043 } 00:16:05.043 ], 00:16:05.043 "driver_specific": {} 00:16:05.043 } 00:16:05.043 ] 00:16:05.043 10:29:58 -- 
common/autotest_common.sh@895 -- # return 0 00:16:05.043 10:29:58 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:05.043 10:29:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:05.043 10:29:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:05.043 10:29:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:05.043 10:29:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:05.043 10:29:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:05.043 10:29:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:05.043 10:29:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:05.043 10:29:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:05.043 10:29:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:05.043 10:29:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:05.043 10:29:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:05.301 10:29:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:05.301 "name": "Existed_Raid", 00:16:05.301 "uuid": "a788f323-aec4-468b-bc6b-02f1ab02c77e", 00:16:05.301 "strip_size_kb": 0, 00:16:05.301 "state": "configuring", 00:16:05.301 "raid_level": "raid1", 00:16:05.301 "superblock": true, 00:16:05.301 "num_base_bdevs": 2, 00:16:05.301 "num_base_bdevs_discovered": 1, 00:16:05.301 "num_base_bdevs_operational": 2, 00:16:05.301 "base_bdevs_list": [ 00:16:05.301 { 00:16:05.301 "name": "BaseBdev1", 00:16:05.301 "uuid": "1b52727c-53f3-49fe-8630-bf56ec614c2d", 00:16:05.301 "is_configured": true, 00:16:05.301 "data_offset": 2048, 00:16:05.301 "data_size": 63488 00:16:05.301 }, 00:16:05.301 { 00:16:05.301 "name": "BaseBdev2", 00:16:05.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.301 "is_configured": false, 00:16:05.301 "data_offset": 0, 00:16:05.301 "data_size": 0 00:16:05.301 } 00:16:05.301 ] 00:16:05.301 }' 00:16:05.301 10:29:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:05.301 10:29:59 -- common/autotest_common.sh@10 -- # set +x 00:16:05.868 10:29:59 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:06.127 [2024-07-12 10:29:59.945559] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:06.127 [2024-07-12 10:29:59.945604] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:16:06.127 10:29:59 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:16:06.127 10:29:59 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:06.385 10:30:00 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:06.643 BaseBdev1 00:16:06.643 10:30:00 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:16:06.643 10:30:00 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:06.643 10:30:00 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:06.643 10:30:00 -- common/autotest_common.sh@889 -- # local i 00:16:06.643 10:30:00 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:06.643 10:30:00 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:06.643 10:30:00 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:06.902 10:30:00 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:06.902 [ 00:16:06.902 { 00:16:06.902 "name": "BaseBdev1", 00:16:06.902 "aliases": [ 00:16:06.902 "e5a787c0-51c1-4d5c-8c81-3e73eae82e55" 00:16:06.902 ], 00:16:06.902 "product_name": "Malloc disk", 00:16:06.902 "block_size": 512, 00:16:06.902 "num_blocks": 65536, 00:16:06.902 "uuid": "e5a787c0-51c1-4d5c-8c81-3e73eae82e55", 00:16:06.902 "assigned_rate_limits": { 00:16:06.902 "rw_ios_per_sec": 0, 00:16:06.902 "rw_mbytes_per_sec": 0, 00:16:06.902 "r_mbytes_per_sec": 0, 00:16:06.902 "w_mbytes_per_sec": 0 00:16:06.902 }, 00:16:06.902 "claimed": false, 00:16:06.902 "zoned": false, 00:16:06.902 "supported_io_types": { 00:16:06.902 "read": true, 00:16:06.902 "write": true, 00:16:06.902 "unmap": true, 00:16:06.902 "write_zeroes": true, 00:16:06.902 "flush": true, 00:16:06.902 "reset": true, 00:16:06.902 "compare": false, 00:16:06.902 "compare_and_write": false, 00:16:06.902 "abort": true, 00:16:06.902 "nvme_admin": false, 00:16:06.902 "nvme_io": false 00:16:06.902 }, 00:16:06.902 "memory_domains": [ 00:16:06.902 { 00:16:06.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.902 "dma_device_type": 2 00:16:06.902 } 00:16:06.902 ], 00:16:06.902 "driver_specific": {} 00:16:06.902 } 00:16:06.902 ] 00:16:06.902 10:30:00 -- common/autotest_common.sh@895 -- # return 0 00:16:06.902 10:30:00 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:07.161 [2024-07-12 10:30:00.947005] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:07.161 [2024-07-12 10:30:00.948783] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:07.161 [2024-07-12 10:30:00.948846] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:07.161 10:30:00 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:07.161 10:30:00 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:07.161 10:30:00 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:07.161 10:30:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:07.161 10:30:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:07.161 10:30:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:07.161 10:30:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:07.161 10:30:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:07.161 10:30:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:07.161 10:30:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:07.161 10:30:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:07.161 10:30:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:07.161 10:30:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:07.161 10:30:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:07.420 10:30:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:07.420 "name": "Existed_Raid", 00:16:07.420 "uuid": "ba5885c0-453d-46b3-b2b4-32ec4f88af6f", 00:16:07.420 "strip_size_kb": 0, 00:16:07.420 "state": "configuring", 
00:16:07.420 "raid_level": "raid1", 00:16:07.420 "superblock": true, 00:16:07.420 "num_base_bdevs": 2, 00:16:07.420 "num_base_bdevs_discovered": 1, 00:16:07.420 "num_base_bdevs_operational": 2, 00:16:07.420 "base_bdevs_list": [ 00:16:07.420 { 00:16:07.420 "name": "BaseBdev1", 00:16:07.420 "uuid": "e5a787c0-51c1-4d5c-8c81-3e73eae82e55", 00:16:07.420 "is_configured": true, 00:16:07.420 "data_offset": 2048, 00:16:07.420 "data_size": 63488 00:16:07.420 }, 00:16:07.420 { 00:16:07.420 "name": "BaseBdev2", 00:16:07.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.420 "is_configured": false, 00:16:07.420 "data_offset": 0, 00:16:07.420 "data_size": 0 00:16:07.420 } 00:16:07.420 ] 00:16:07.420 }' 00:16:07.420 10:30:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:07.420 10:30:01 -- common/autotest_common.sh@10 -- # set +x 00:16:07.987 10:30:01 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:08.245 [2024-07-12 10:30:02.117609] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:08.245 [2024-07-12 10:30:02.117816] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:16:08.245 [2024-07-12 10:30:02.117831] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:08.245 BaseBdev2 00:16:08.245 [2024-07-12 10:30:02.117984] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:16:08.245 [2024-07-12 10:30:02.118348] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:16:08.245 [2024-07-12 10:30:02.118373] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:16:08.245 [2024-07-12 10:30:02.118522] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:08.245 10:30:02 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:08.245 10:30:02 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:16:08.245 10:30:02 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:08.245 10:30:02 -- common/autotest_common.sh@889 -- # local i 00:16:08.245 10:30:02 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:08.245 10:30:02 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:08.245 10:30:02 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:08.503 10:30:02 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:08.762 [ 00:16:08.762 { 00:16:08.762 "name": "BaseBdev2", 00:16:08.762 "aliases": [ 00:16:08.762 "6d08f724-184b-45bd-be0d-1275040dbecb" 00:16:08.762 ], 00:16:08.762 "product_name": "Malloc disk", 00:16:08.762 "block_size": 512, 00:16:08.762 "num_blocks": 65536, 00:16:08.762 "uuid": "6d08f724-184b-45bd-be0d-1275040dbecb", 00:16:08.762 "assigned_rate_limits": { 00:16:08.762 "rw_ios_per_sec": 0, 00:16:08.762 "rw_mbytes_per_sec": 0, 00:16:08.762 "r_mbytes_per_sec": 0, 00:16:08.762 "w_mbytes_per_sec": 0 00:16:08.762 }, 00:16:08.762 "claimed": true, 00:16:08.762 "claim_type": "exclusive_write", 00:16:08.762 "zoned": false, 00:16:08.762 "supported_io_types": { 00:16:08.762 "read": true, 00:16:08.762 "write": true, 00:16:08.762 "unmap": true, 00:16:08.762 "write_zeroes": true, 00:16:08.762 "flush": true, 00:16:08.762 "reset": true, 
00:16:08.762 "compare": false, 00:16:08.762 "compare_and_write": false, 00:16:08.762 "abort": true, 00:16:08.762 "nvme_admin": false, 00:16:08.762 "nvme_io": false 00:16:08.762 }, 00:16:08.762 "memory_domains": [ 00:16:08.762 { 00:16:08.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:08.762 "dma_device_type": 2 00:16:08.762 } 00:16:08.762 ], 00:16:08.762 "driver_specific": {} 00:16:08.762 } 00:16:08.762 ] 00:16:08.762 10:30:02 -- common/autotest_common.sh@895 -- # return 0 00:16:08.762 10:30:02 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:08.762 10:30:02 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:08.762 10:30:02 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:08.762 10:30:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:08.762 10:30:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:08.762 10:30:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:08.762 10:30:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:08.762 10:30:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:08.762 10:30:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:08.762 10:30:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:08.762 10:30:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:08.762 10:30:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:08.762 10:30:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:08.762 10:30:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:08.762 10:30:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:08.762 "name": "Existed_Raid", 00:16:08.762 "uuid": "ba5885c0-453d-46b3-b2b4-32ec4f88af6f", 00:16:08.762 "strip_size_kb": 0, 00:16:08.762 "state": "online", 00:16:08.762 "raid_level": "raid1", 00:16:08.762 "superblock": true, 00:16:08.762 "num_base_bdevs": 2, 00:16:08.762 "num_base_bdevs_discovered": 2, 00:16:08.762 "num_base_bdevs_operational": 2, 00:16:08.762 "base_bdevs_list": [ 00:16:08.762 { 00:16:08.762 "name": "BaseBdev1", 00:16:08.762 "uuid": "e5a787c0-51c1-4d5c-8c81-3e73eae82e55", 00:16:08.762 "is_configured": true, 00:16:08.762 "data_offset": 2048, 00:16:08.762 "data_size": 63488 00:16:08.762 }, 00:16:08.762 { 00:16:08.762 "name": "BaseBdev2", 00:16:08.762 "uuid": "6d08f724-184b-45bd-be0d-1275040dbecb", 00:16:08.762 "is_configured": true, 00:16:08.762 "data_offset": 2048, 00:16:08.762 "data_size": 63488 00:16:08.762 } 00:16:08.762 ] 00:16:08.762 }' 00:16:08.762 10:30:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:08.762 10:30:02 -- common/autotest_common.sh@10 -- # set +x 00:16:09.696 10:30:03 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:09.696 [2024-07-12 10:30:03.529979] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:09.696 10:30:03 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:09.696 10:30:03 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:16:09.696 10:30:03 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:09.696 10:30:03 -- bdev/bdev_raid.sh@196 -- # return 0 00:16:09.696 10:30:03 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:16:09.696 10:30:03 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:09.696 10:30:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:09.696 
10:30:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:09.696 10:30:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:09.696 10:30:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:09.696 10:30:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:16:09.696 10:30:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:09.696 10:30:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:09.696 10:30:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:09.696 10:30:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:09.696 10:30:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:09.696 10:30:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:09.954 10:30:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:09.954 "name": "Existed_Raid", 00:16:09.954 "uuid": "ba5885c0-453d-46b3-b2b4-32ec4f88af6f", 00:16:09.954 "strip_size_kb": 0, 00:16:09.954 "state": "online", 00:16:09.954 "raid_level": "raid1", 00:16:09.954 "superblock": true, 00:16:09.954 "num_base_bdevs": 2, 00:16:09.954 "num_base_bdevs_discovered": 1, 00:16:09.954 "num_base_bdevs_operational": 1, 00:16:09.954 "base_bdevs_list": [ 00:16:09.954 { 00:16:09.954 "name": null, 00:16:09.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.954 "is_configured": false, 00:16:09.954 "data_offset": 2048, 00:16:09.954 "data_size": 63488 00:16:09.954 }, 00:16:09.954 { 00:16:09.954 "name": "BaseBdev2", 00:16:09.954 "uuid": "6d08f724-184b-45bd-be0d-1275040dbecb", 00:16:09.954 "is_configured": true, 00:16:09.954 "data_offset": 2048, 00:16:09.954 "data_size": 63488 00:16:09.954 } 00:16:09.954 ] 00:16:09.954 }' 00:16:09.954 10:30:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:09.954 10:30:03 -- common/autotest_common.sh@10 -- # set +x 00:16:10.889 10:30:04 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:10.889 10:30:04 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:10.889 10:30:04 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:10.889 10:30:04 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:10.889 10:30:04 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:10.889 10:30:04 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:10.889 10:30:04 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:11.148 [2024-07-12 10:30:04.897162] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:11.148 [2024-07-12 10:30:04.897195] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:11.148 [2024-07-12 10:30:04.897259] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:11.148 [2024-07-12 10:30:04.960649] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:11.148 [2024-07-12 10:30:04.960686] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:16:11.148 10:30:04 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:11.148 10:30:04 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:11.148 10:30:04 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:11.148 10:30:04 -- bdev/bdev_raid.sh@281 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:11.407 10:30:05 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:11.407 10:30:05 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:11.407 10:30:05 -- bdev/bdev_raid.sh@287 -- # killprocess 117162 00:16:11.407 10:30:05 -- common/autotest_common.sh@926 -- # '[' -z 117162 ']' 00:16:11.407 10:30:05 -- common/autotest_common.sh@930 -- # kill -0 117162 00:16:11.407 10:30:05 -- common/autotest_common.sh@931 -- # uname 00:16:11.407 10:30:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:11.407 10:30:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 117162 00:16:11.407 10:30:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:11.407 killing process with pid 117162 00:16:11.407 10:30:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:11.407 10:30:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 117162' 00:16:11.407 10:30:05 -- common/autotest_common.sh@945 -- # kill 117162 00:16:11.407 10:30:05 -- common/autotest_common.sh@950 -- # wait 117162 00:16:11.407 [2024-07-12 10:30:05.220754] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:11.407 [2024-07-12 10:30:05.220872] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:12.342 10:30:06 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:12.342 ************************************ 00:16:12.342 END TEST raid_state_function_test_sb 00:16:12.342 ************************************ 00:16:12.342 00:16:12.342 real 0m10.423s 00:16:12.342 user 0m18.303s 00:16:12.342 sys 0m1.246s 00:16:12.342 10:30:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:12.342 10:30:06 -- common/autotest_common.sh@10 -- # set +x 00:16:12.342 10:30:06 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:16:12.342 10:30:06 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:16:12.342 10:30:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:12.342 10:30:06 -- common/autotest_common.sh@10 -- # set +x 00:16:12.342 ************************************ 00:16:12.342 START TEST raid_superblock_test 00:16:12.342 ************************************ 00:16:12.342 10:30:06 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid1 2 00:16:12.342 10:30:06 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:16:12.342 10:30:06 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:16:12.342 10:30:06 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:16:12.342 10:30:06 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:16:12.342 10:30:06 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:16:12.342 10:30:06 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:16:12.342 10:30:06 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:16:12.342 10:30:06 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:16:12.342 10:30:06 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:16:12.342 10:30:06 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:16:12.342 10:30:06 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:16:12.342 10:30:06 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:16:12.342 10:30:06 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:16:12.342 10:30:06 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:16:12.342 10:30:06 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:16:12.342 10:30:06 -- bdev/bdev_raid.sh@357 -- # raid_pid=117504 00:16:12.342 
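Each test in this trace reuses the same harness pattern, visible again in the lines that follow: launch a bare bdev_svc app on a private RPC socket with bdev_raid debug logging enabled, wait for the socket, then drive everything through rpc.py. As a rough shell sketch (the $! capture for raid_pid is an assumption about the script source; the evaluated value 117504 and the paths are exactly what the trace logs):

    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
    raid_pid=$!          # logged below as raid_pid=117504
    waitforlisten $raid_pid /var/tmp/spdk-raid.sock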
10:30:06 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:12.342 10:30:06 -- bdev/bdev_raid.sh@358 -- # waitforlisten 117504 /var/tmp/spdk-raid.sock 00:16:12.342 10:30:06 -- common/autotest_common.sh@819 -- # '[' -z 117504 ']' 00:16:12.342 10:30:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:12.342 10:30:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:12.342 10:30:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:12.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:12.342 10:30:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:12.342 10:30:06 -- common/autotest_common.sh@10 -- # set +x 00:16:12.342 [2024-07-12 10:30:06.246077] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:16:12.342 [2024-07-12 10:30:06.246222] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117504 ] 00:16:12.601 [2024-07-12 10:30:06.391223] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:12.859 [2024-07-12 10:30:06.557202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:12.859 [2024-07-12 10:30:06.720526] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:13.426 10:30:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:13.426 10:30:07 -- common/autotest_common.sh@852 -- # return 0 00:16:13.426 10:30:07 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:16:13.426 10:30:07 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:13.426 10:30:07 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:16:13.426 10:30:07 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:16:13.426 10:30:07 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:13.426 10:30:07 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:13.426 10:30:07 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:13.426 10:30:07 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:13.426 10:30:07 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:16:13.684 malloc1 00:16:13.684 10:30:07 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:13.943 [2024-07-12 10:30:07.662943] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:13.943 [2024-07-12 10:30:07.663020] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:13.943 [2024-07-12 10:30:07.663050] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:16:13.943 [2024-07-12 10:30:07.663093] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:13.943 [2024-07-12 10:30:07.665382] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:13.943 [2024-07-12 10:30:07.665427] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:13.943 pt1 
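Worth noting as pt1 comes up: raid_superblock_test interposes a passthru bdev over each malloc, so the array will be assembled from pt1/pt2 while the superblock is persisted on the mallocs underneath; that is what lets the test tear the passthrus down later and still find the superblock on malloc1/malloc2. The layering just logged reduces to two rpc.py calls (the -u UUID is the fixed value the test assigns):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001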
00:16:13.943 10:30:07 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:13.943 10:30:07 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:13.943 10:30:07 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:16:13.943 10:30:07 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:16:13.943 10:30:07 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:13.943 10:30:07 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:13.943 10:30:07 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:13.943 10:30:07 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:13.943 10:30:07 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:16:14.202 malloc2 00:16:14.202 10:30:07 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:14.461 [2024-07-12 10:30:08.158446] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:14.461 [2024-07-12 10:30:08.158521] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:14.461 [2024-07-12 10:30:08.158562] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:16:14.461 [2024-07-12 10:30:08.158617] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:14.461 [2024-07-12 10:30:08.160434] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:14.461 [2024-07-12 10:30:08.160475] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:14.461 pt2 00:16:14.461 10:30:08 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:14.461 10:30:08 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:14.461 10:30:08 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:16:14.461 [2024-07-12 10:30:08.342510] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:14.461 [2024-07-12 10:30:08.344330] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:14.461 [2024-07-12 10:30:08.344510] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:16:14.461 [2024-07-12 10:30:08.344524] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:14.461 [2024-07-12 10:30:08.344635] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:16:14.461 [2024-07-12 10:30:08.345000] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:16:14.461 [2024-07-12 10:30:08.345014] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:16:14.461 [2024-07-12 10:30:08.345145] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:14.461 10:30:08 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:14.461 10:30:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:14.461 10:30:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:14.461 10:30:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:14.461 10:30:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:14.461 10:30:08 -- 
bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:14.461 10:30:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:14.461 10:30:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:14.461 10:30:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:14.461 10:30:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:14.461 10:30:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:14.461 10:30:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.720 10:30:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:14.720 "name": "raid_bdev1", 00:16:14.720 "uuid": "cf01a04e-477b-4d81-8089-60ad4317e794", 00:16:14.720 "strip_size_kb": 0, 00:16:14.720 "state": "online", 00:16:14.720 "raid_level": "raid1", 00:16:14.720 "superblock": true, 00:16:14.720 "num_base_bdevs": 2, 00:16:14.720 "num_base_bdevs_discovered": 2, 00:16:14.720 "num_base_bdevs_operational": 2, 00:16:14.720 "base_bdevs_list": [ 00:16:14.720 { 00:16:14.720 "name": "pt1", 00:16:14.720 "uuid": "d40af494-8908-5089-8f1c-130339168b53", 00:16:14.720 "is_configured": true, 00:16:14.720 "data_offset": 2048, 00:16:14.720 "data_size": 63488 00:16:14.720 }, 00:16:14.720 { 00:16:14.720 "name": "pt2", 00:16:14.720 "uuid": "d2ca4723-bd09-540d-a4e0-0a06b1c8e010", 00:16:14.720 "is_configured": true, 00:16:14.720 "data_offset": 2048, 00:16:14.720 "data_size": 63488 00:16:14.720 } 00:16:14.720 ] 00:16:14.720 }' 00:16:14.720 10:30:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:14.720 10:30:08 -- common/autotest_common.sh@10 -- # set +x 00:16:15.287 10:30:09 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:15.287 10:30:09 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:16:15.546 [2024-07-12 10:30:09.370768] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:15.546 10:30:09 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=cf01a04e-477b-4d81-8089-60ad4317e794 00:16:15.546 10:30:09 -- bdev/bdev_raid.sh@380 -- # '[' -z cf01a04e-477b-4d81-8089-60ad4317e794 ']' 00:16:15.546 10:30:09 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:15.805 [2024-07-12 10:30:09.630627] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:15.805 [2024-07-12 10:30:09.630650] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:15.805 [2024-07-12 10:30:09.630722] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:15.805 [2024-07-12 10:30:09.630776] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:15.805 [2024-07-12 10:30:09.630789] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:16:15.805 10:30:09 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:15.805 10:30:09 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:16:16.064 10:30:09 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:16:16.064 10:30:09 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:16:16.064 10:30:09 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:16.064 10:30:09 -- bdev/bdev_raid.sh@393 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:16.322 10:30:10 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:16.322 10:30:10 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:16.322 10:30:10 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:16:16.322 10:30:10 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:16.580 10:30:10 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:16:16.580 10:30:10 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:16:16.580 10:30:10 -- common/autotest_common.sh@640 -- # local es=0 00:16:16.580 10:30:10 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:16:16.580 10:30:10 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:16.580 10:30:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:16.580 10:30:10 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:16.580 10:30:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:16.580 10:30:10 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:16.580 10:30:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:16.580 10:30:10 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:16.580 10:30:10 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:16.580 10:30:10 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:16:16.839 [2024-07-12 10:30:10.578760] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:16.839 [2024-07-12 10:30:10.580623] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:16.839 [2024-07-12 10:30:10.580683] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:16:16.839 [2024-07-12 10:30:10.580731] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:16:16.839 [2024-07-12 10:30:10.580765] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:16.839 [2024-07-12 10:30:10.580775] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state configuring 00:16:16.839 request: 00:16:16.839 { 00:16:16.839 "name": "raid_bdev1", 00:16:16.839 "raid_level": "raid1", 00:16:16.839 "base_bdevs": [ 00:16:16.839 "malloc1", 00:16:16.839 "malloc2" 00:16:16.839 ], 00:16:16.839 "superblock": false, 00:16:16.839 "method": "bdev_raid_create", 00:16:16.839 "req_id": 1 00:16:16.839 } 00:16:16.839 Got JSON-RPC error response 00:16:16.839 response: 00:16:16.839 { 00:16:16.839 "code": -17, 00:16:16.839 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:16.839 } 00:16:16.839 10:30:10 -- common/autotest_common.sh@643 -- # es=1 00:16:16.839 
10:30:10 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:16.839 10:30:10 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:16.839 10:30:10 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:16.839 10:30:10 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:16.839 10:30:10 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:16:17.097 10:30:10 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:16:17.097 10:30:10 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:16:17.097 10:30:10 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:17.356 [2024-07-12 10:30:11.014785] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:17.356 [2024-07-12 10:30:11.014871] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:17.356 [2024-07-12 10:30:11.014922] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:17.356 [2024-07-12 10:30:11.014946] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:17.356 [2024-07-12 10:30:11.017392] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:17.356 [2024-07-12 10:30:11.017443] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:17.356 [2024-07-12 10:30:11.017530] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:16:17.356 [2024-07-12 10:30:11.017639] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:17.356 pt1 00:16:17.356 10:30:11 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:16:17.356 10:30:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:17.356 10:30:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:17.356 10:30:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:17.356 10:30:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:17.356 10:30:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:17.356 10:30:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:17.356 10:30:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:17.356 10:30:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:17.356 10:30:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:17.356 10:30:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:17.356 10:30:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.356 10:30:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:17.356 "name": "raid_bdev1", 00:16:17.356 "uuid": "cf01a04e-477b-4d81-8089-60ad4317e794", 00:16:17.356 "strip_size_kb": 0, 00:16:17.356 "state": "configuring", 00:16:17.356 "raid_level": "raid1", 00:16:17.356 "superblock": true, 00:16:17.356 "num_base_bdevs": 2, 00:16:17.356 "num_base_bdevs_discovered": 1, 00:16:17.356 "num_base_bdevs_operational": 2, 00:16:17.356 "base_bdevs_list": [ 00:16:17.356 { 00:16:17.356 "name": "pt1", 00:16:17.356 "uuid": "d40af494-8908-5089-8f1c-130339168b53", 00:16:17.356 "is_configured": true, 00:16:17.356 "data_offset": 2048, 00:16:17.356 "data_size": 63488 00:16:17.356 }, 00:16:17.356 { 00:16:17.356 "name": null, 00:16:17.356 "uuid": 
"d2ca4723-bd09-540d-a4e0-0a06b1c8e010", 00:16:17.356 "is_configured": false, 00:16:17.356 "data_offset": 2048, 00:16:17.356 "data_size": 63488 00:16:17.356 } 00:16:17.356 ] 00:16:17.356 }' 00:16:17.356 10:30:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:17.356 10:30:11 -- common/autotest_common.sh@10 -- # set +x 00:16:17.922 10:30:11 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:16:17.922 10:30:11 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:16:17.922 10:30:11 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:17.922 10:30:11 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:18.180 [2024-07-12 10:30:12.066966] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:18.180 [2024-07-12 10:30:12.067042] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:18.180 [2024-07-12 10:30:12.067075] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:18.180 [2024-07-12 10:30:12.067101] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:18.180 [2024-07-12 10:30:12.067510] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:18.180 [2024-07-12 10:30:12.067559] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:18.180 [2024-07-12 10:30:12.067639] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:18.180 [2024-07-12 10:30:12.067664] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:18.180 [2024-07-12 10:30:12.067776] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:16:18.180 [2024-07-12 10:30:12.067788] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:18.180 [2024-07-12 10:30:12.067893] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:16:18.180 [2024-07-12 10:30:12.068201] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:16:18.180 [2024-07-12 10:30:12.068223] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:16:18.180 [2024-07-12 10:30:12.068339] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:18.180 pt2 00:16:18.180 10:30:12 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:16:18.180 10:30:12 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:18.180 10:30:12 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:18.180 10:30:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:18.180 10:30:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:18.180 10:30:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:18.180 10:30:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:18.180 10:30:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:18.180 10:30:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:18.180 10:30:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:18.180 10:30:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:18.180 10:30:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:18.180 10:30:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:16:18.180 10:30:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.437 10:30:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:18.437 "name": "raid_bdev1", 00:16:18.437 "uuid": "cf01a04e-477b-4d81-8089-60ad4317e794", 00:16:18.437 "strip_size_kb": 0, 00:16:18.437 "state": "online", 00:16:18.437 "raid_level": "raid1", 00:16:18.437 "superblock": true, 00:16:18.437 "num_base_bdevs": 2, 00:16:18.437 "num_base_bdevs_discovered": 2, 00:16:18.437 "num_base_bdevs_operational": 2, 00:16:18.437 "base_bdevs_list": [ 00:16:18.437 { 00:16:18.437 "name": "pt1", 00:16:18.437 "uuid": "d40af494-8908-5089-8f1c-130339168b53", 00:16:18.437 "is_configured": true, 00:16:18.437 "data_offset": 2048, 00:16:18.437 "data_size": 63488 00:16:18.437 }, 00:16:18.437 { 00:16:18.437 "name": "pt2", 00:16:18.438 "uuid": "d2ca4723-bd09-540d-a4e0-0a06b1c8e010", 00:16:18.438 "is_configured": true, 00:16:18.438 "data_offset": 2048, 00:16:18.438 "data_size": 63488 00:16:18.438 } 00:16:18.438 ] 00:16:18.438 }' 00:16:18.438 10:30:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:18.438 10:30:12 -- common/autotest_common.sh@10 -- # set +x 00:16:19.001 10:30:12 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:19.001 10:30:12 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:16:19.259 [2024-07-12 10:30:13.139302] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:19.259 10:30:13 -- bdev/bdev_raid.sh@430 -- # '[' cf01a04e-477b-4d81-8089-60ad4317e794 '!=' cf01a04e-477b-4d81-8089-60ad4317e794 ']' 00:16:19.259 10:30:13 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:16:19.259 10:30:13 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:19.259 10:30:13 -- bdev/bdev_raid.sh@196 -- # return 0 00:16:19.259 10:30:13 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:19.517 [2024-07-12 10:30:13.335179] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:19.517 10:30:13 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:19.517 10:30:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:19.517 10:30:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:19.517 10:30:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:19.517 10:30:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:19.517 10:30:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:16:19.517 10:30:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:19.517 10:30:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:19.517 10:30:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:19.517 10:30:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:19.517 10:30:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:19.517 10:30:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.775 10:30:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:19.775 "name": "raid_bdev1", 00:16:19.775 "uuid": "cf01a04e-477b-4d81-8089-60ad4317e794", 00:16:19.775 "strip_size_kb": 0, 00:16:19.775 "state": "online", 00:16:19.775 "raid_level": "raid1", 00:16:19.775 "superblock": true, 00:16:19.775 "num_base_bdevs": 2, 00:16:19.775 "num_base_bdevs_discovered": 1, 00:16:19.775 
"num_base_bdevs_operational": 1, 00:16:19.775 "base_bdevs_list": [ 00:16:19.775 { 00:16:19.775 "name": null, 00:16:19.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.775 "is_configured": false, 00:16:19.775 "data_offset": 2048, 00:16:19.775 "data_size": 63488 00:16:19.775 }, 00:16:19.775 { 00:16:19.775 "name": "pt2", 00:16:19.775 "uuid": "d2ca4723-bd09-540d-a4e0-0a06b1c8e010", 00:16:19.775 "is_configured": true, 00:16:19.775 "data_offset": 2048, 00:16:19.775 "data_size": 63488 00:16:19.775 } 00:16:19.775 ] 00:16:19.775 }' 00:16:19.775 10:30:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:19.775 10:30:13 -- common/autotest_common.sh@10 -- # set +x 00:16:20.341 10:30:14 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:20.600 [2024-07-12 10:30:14.467338] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:20.600 [2024-07-12 10:30:14.467369] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:20.600 [2024-07-12 10:30:14.467412] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:20.600 [2024-07-12 10:30:14.467449] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:20.600 [2024-07-12 10:30:14.467459] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:16:20.600 10:30:14 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:20.600 10:30:14 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:16:20.927 10:30:14 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:16:20.927 10:30:14 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:16:20.927 10:30:14 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:16:20.927 10:30:14 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:16:20.927 10:30:14 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:21.204 10:30:14 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:16:21.204 10:30:14 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:16:21.204 10:30:14 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:16:21.204 10:30:14 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:16:21.204 10:30:14 -- bdev/bdev_raid.sh@462 -- # i=1 00:16:21.204 10:30:14 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:21.204 [2024-07-12 10:30:15.083451] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:21.204 [2024-07-12 10:30:15.083524] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:21.204 [2024-07-12 10:30:15.083553] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:21.204 [2024-07-12 10:30:15.083582] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:21.204 [2024-07-12 10:30:15.085704] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:21.204 [2024-07-12 10:30:15.085756] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:21.204 [2024-07-12 10:30:15.085836] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:21.204 [2024-07-12 
10:30:15.085887] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:21.204 [2024-07-12 10:30:15.085970] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:16:21.204 [2024-07-12 10:30:15.085982] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:21.204 [2024-07-12 10:30:15.086064] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:16:21.204 [2024-07-12 10:30:15.086355] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:16:21.204 [2024-07-12 10:30:15.086377] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:16:21.204 [2024-07-12 10:30:15.086485] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:21.204 pt2 00:16:21.204 10:30:15 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:21.204 10:30:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:21.204 10:30:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:21.204 10:30:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:21.204 10:30:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:21.204 10:30:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:16:21.204 10:30:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:21.204 10:30:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:21.204 10:30:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:21.204 10:30:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:21.204 10:30:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:21.204 10:30:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.470 10:30:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:21.470 "name": "raid_bdev1", 00:16:21.470 "uuid": "cf01a04e-477b-4d81-8089-60ad4317e794", 00:16:21.470 "strip_size_kb": 0, 00:16:21.470 "state": "online", 00:16:21.470 "raid_level": "raid1", 00:16:21.470 "superblock": true, 00:16:21.470 "num_base_bdevs": 2, 00:16:21.470 "num_base_bdevs_discovered": 1, 00:16:21.470 "num_base_bdevs_operational": 1, 00:16:21.470 "base_bdevs_list": [ 00:16:21.470 { 00:16:21.470 "name": null, 00:16:21.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.470 "is_configured": false, 00:16:21.470 "data_offset": 2048, 00:16:21.470 "data_size": 63488 00:16:21.470 }, 00:16:21.470 { 00:16:21.470 "name": "pt2", 00:16:21.470 "uuid": "d2ca4723-bd09-540d-a4e0-0a06b1c8e010", 00:16:21.470 "is_configured": true, 00:16:21.470 "data_offset": 2048, 00:16:21.470 "data_size": 63488 00:16:21.470 } 00:16:21.470 ] 00:16:21.470 }' 00:16:21.470 10:30:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:21.470 10:30:15 -- common/autotest_common.sh@10 -- # set +x 00:16:22.403 10:30:15 -- bdev/bdev_raid.sh@468 -- # '[' 2 -gt 2 ']' 00:16:22.403 10:30:15 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:22.403 10:30:15 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:16:22.403 [2024-07-12 10:30:16.172473] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:22.403 10:30:16 -- bdev/bdev_raid.sh@506 -- # '[' cf01a04e-477b-4d81-8089-60ad4317e794 '!=' cf01a04e-477b-4d81-8089-60ad4317e794 ']' 00:16:22.403 
10:30:16 -- bdev/bdev_raid.sh@511 -- # killprocess 117504 00:16:22.403 10:30:16 -- common/autotest_common.sh@926 -- # '[' -z 117504 ']' 00:16:22.403 10:30:16 -- common/autotest_common.sh@930 -- # kill -0 117504 00:16:22.403 10:30:16 -- common/autotest_common.sh@931 -- # uname 00:16:22.403 10:30:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:22.403 10:30:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 117504 00:16:22.403 killing process with pid 117504 00:16:22.403 10:30:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:22.403 10:30:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:22.403 10:30:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 117504' 00:16:22.403 10:30:16 -- common/autotest_common.sh@945 -- # kill 117504 00:16:22.403 10:30:16 -- common/autotest_common.sh@950 -- # wait 117504 00:16:22.403 [2024-07-12 10:30:16.208417] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:22.403 [2024-07-12 10:30:16.208492] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:22.403 [2024-07-12 10:30:16.208580] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:22.403 [2024-07-12 10:30:16.208601] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:16:22.662 [2024-07-12 10:30:16.336609] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:23.596 ************************************ 00:16:23.596 END TEST raid_superblock_test 00:16:23.596 ************************************ 00:16:23.596 10:30:17 -- bdev/bdev_raid.sh@513 -- # return 0 00:16:23.596 00:16:23.596 real 0m11.049s 00:16:23.596 user 0m19.705s 00:16:23.596 sys 0m1.308s 00:16:23.596 10:30:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:23.596 10:30:17 -- common/autotest_common.sh@10 -- # set +x 00:16:23.596 10:30:17 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:16:23.596 10:30:17 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:16:23.596 10:30:17 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:16:23.596 10:30:17 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:16:23.596 10:30:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:23.596 10:30:17 -- common/autotest_common.sh@10 -- # set +x 00:16:23.596 ************************************ 00:16:23.596 START TEST raid_state_function_test 00:16:23.596 ************************************ 00:16:23.596 10:30:17 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 3 false 00:16:23.596 10:30:17 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:16:23.596 10:30:17 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:16:23.596 10:30:17 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:16:23.596 10:30:17 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:23.596 10:30:17 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:16:23.596 10:30:17 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:23.596 10:30:17 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:23.596 10:30:17 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:16:23.596 10:30:17 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:23.596 10:30:17 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:23.596 10:30:17 -- bdev/bdev_raid.sh@206 -- # 
echo BaseBdev2 00:16:23.596 10:30:17 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:23.596 10:30:17 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:23.596 10:30:17 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:16:23.596 10:30:17 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:23.596 10:30:17 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:23.596 10:30:17 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:23.596 10:30:17 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:23.596 10:30:17 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:23.596 10:30:17 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:23.596 10:30:17 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:23.596 10:30:17 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:16:23.596 10:30:17 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:16:23.596 10:30:17 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:16:23.596 10:30:17 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:16:23.596 10:30:17 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:16:23.596 10:30:17 -- bdev/bdev_raid.sh@226 -- # raid_pid=117870 00:16:23.596 Process raid pid: 117870 00:16:23.596 10:30:17 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 117870' 00:16:23.596 10:30:17 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:23.596 10:30:17 -- bdev/bdev_raid.sh@228 -- # waitforlisten 117870 /var/tmp/spdk-raid.sock 00:16:23.596 10:30:17 -- common/autotest_common.sh@819 -- # '[' -z 117870 ']' 00:16:23.596 10:30:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:23.596 10:30:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:23.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:23.596 10:30:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:23.596 10:30:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:23.596 10:30:17 -- common/autotest_common.sh@10 -- # set +x 00:16:23.596 [2024-07-12 10:30:17.355074] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:16:23.596 [2024-07-12 10:30:17.355262] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:23.596 [2024-07-12 10:30:17.510704] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:23.855 [2024-07-12 10:30:17.725023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.113 [2024-07-12 10:30:17.914040] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:24.680 10:30:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:24.680 10:30:18 -- common/autotest_common.sh@852 -- # return 0 00:16:24.680 10:30:18 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:24.680 [2024-07-12 10:30:18.457763] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:24.680 [2024-07-12 10:30:18.457860] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:24.680 [2024-07-12 10:30:18.457875] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:24.680 [2024-07-12 10:30:18.457897] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:24.680 [2024-07-12 10:30:18.457904] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:24.680 [2024-07-12 10:30:18.457955] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:24.680 10:30:18 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:24.680 10:30:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:24.680 10:30:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:24.680 10:30:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:24.680 10:30:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:24.680 10:30:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:24.680 10:30:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:24.680 10:30:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:24.680 10:30:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:24.680 10:30:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:24.680 10:30:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:24.680 10:30:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:24.939 10:30:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:24.939 "name": "Existed_Raid", 00:16:24.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.939 "strip_size_kb": 64, 00:16:24.939 "state": "configuring", 00:16:24.939 "raid_level": "raid0", 00:16:24.939 "superblock": false, 00:16:24.939 "num_base_bdevs": 3, 00:16:24.939 "num_base_bdevs_discovered": 0, 00:16:24.939 "num_base_bdevs_operational": 3, 00:16:24.939 "base_bdevs_list": [ 00:16:24.939 { 00:16:24.939 "name": "BaseBdev1", 00:16:24.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.939 "is_configured": false, 00:16:24.939 "data_offset": 0, 00:16:24.939 "data_size": 0 00:16:24.939 }, 00:16:24.939 { 00:16:24.939 "name": "BaseBdev2", 00:16:24.939 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:24.939 "is_configured": false, 00:16:24.939 "data_offset": 0, 00:16:24.939 "data_size": 0 00:16:24.939 }, 00:16:24.939 { 00:16:24.939 "name": "BaseBdev3", 00:16:24.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.939 "is_configured": false, 00:16:24.939 "data_offset": 0, 00:16:24.939 "data_size": 0 00:16:24.939 } 00:16:24.939 ] 00:16:24.939 }' 00:16:24.939 10:30:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:24.939 10:30:18 -- common/autotest_common.sh@10 -- # set +x 00:16:25.505 10:30:19 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:25.763 [2024-07-12 10:30:19.449772] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:25.763 [2024-07-12 10:30:19.449808] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:16:25.763 10:30:19 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:25.763 [2024-07-12 10:30:19.621837] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:25.763 [2024-07-12 10:30:19.621893] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:25.763 [2024-07-12 10:30:19.621905] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:25.763 [2024-07-12 10:30:19.621922] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:25.763 [2024-07-12 10:30:19.621929] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:25.763 [2024-07-12 10:30:19.621960] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:25.763 10:30:19 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:26.025 [2024-07-12 10:30:19.831374] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:26.025 BaseBdev1 00:16:26.025 10:30:19 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:26.025 10:30:19 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:26.025 10:30:19 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:26.025 10:30:19 -- common/autotest_common.sh@889 -- # local i 00:16:26.025 10:30:19 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:26.025 10:30:19 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:26.025 10:30:19 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:26.283 10:30:20 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:26.541 [ 00:16:26.541 { 00:16:26.541 "name": "BaseBdev1", 00:16:26.541 "aliases": [ 00:16:26.541 "cd97cbc7-0690-4a95-8409-699aa7ddf7f0" 00:16:26.541 ], 00:16:26.541 "product_name": "Malloc disk", 00:16:26.541 "block_size": 512, 00:16:26.541 "num_blocks": 65536, 00:16:26.541 "uuid": "cd97cbc7-0690-4a95-8409-699aa7ddf7f0", 00:16:26.541 "assigned_rate_limits": { 00:16:26.541 "rw_ios_per_sec": 0, 00:16:26.541 "rw_mbytes_per_sec": 0, 00:16:26.541 "r_mbytes_per_sec": 0, 00:16:26.541 "w_mbytes_per_sec": 0 
00:16:26.541 }, 00:16:26.541 "claimed": true, 00:16:26.541 "claim_type": "exclusive_write", 00:16:26.541 "zoned": false, 00:16:26.541 "supported_io_types": { 00:16:26.541 "read": true, 00:16:26.541 "write": true, 00:16:26.541 "unmap": true, 00:16:26.541 "write_zeroes": true, 00:16:26.541 "flush": true, 00:16:26.541 "reset": true, 00:16:26.541 "compare": false, 00:16:26.541 "compare_and_write": false, 00:16:26.541 "abort": true, 00:16:26.541 "nvme_admin": false, 00:16:26.541 "nvme_io": false 00:16:26.541 }, 00:16:26.541 "memory_domains": [ 00:16:26.541 { 00:16:26.541 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:26.541 "dma_device_type": 2 00:16:26.541 } 00:16:26.541 ], 00:16:26.541 "driver_specific": {} 00:16:26.541 } 00:16:26.541 ] 00:16:26.541 10:30:20 -- common/autotest_common.sh@895 -- # return 0 00:16:26.541 10:30:20 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:26.541 10:30:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:26.541 10:30:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:26.541 10:30:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:26.541 10:30:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:26.541 10:30:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:26.541 10:30:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:26.541 10:30:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:26.541 10:30:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:26.541 10:30:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:26.541 10:30:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:26.541 10:30:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:26.799 10:30:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:26.799 "name": "Existed_Raid", 00:16:26.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.799 "strip_size_kb": 64, 00:16:26.799 "state": "configuring", 00:16:26.799 "raid_level": "raid0", 00:16:26.799 "superblock": false, 00:16:26.799 "num_base_bdevs": 3, 00:16:26.799 "num_base_bdevs_discovered": 1, 00:16:26.799 "num_base_bdevs_operational": 3, 00:16:26.799 "base_bdevs_list": [ 00:16:26.799 { 00:16:26.799 "name": "BaseBdev1", 00:16:26.799 "uuid": "cd97cbc7-0690-4a95-8409-699aa7ddf7f0", 00:16:26.799 "is_configured": true, 00:16:26.799 "data_offset": 0, 00:16:26.799 "data_size": 65536 00:16:26.799 }, 00:16:26.799 { 00:16:26.799 "name": "BaseBdev2", 00:16:26.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.799 "is_configured": false, 00:16:26.799 "data_offset": 0, 00:16:26.799 "data_size": 0 00:16:26.799 }, 00:16:26.799 { 00:16:26.799 "name": "BaseBdev3", 00:16:26.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.799 "is_configured": false, 00:16:26.799 "data_offset": 0, 00:16:26.799 "data_size": 0 00:16:26.799 } 00:16:26.799 ] 00:16:26.799 }' 00:16:26.799 10:30:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:26.799 10:30:20 -- common/autotest_common.sh@10 -- # set +x 00:16:27.396 10:30:21 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:27.654 [2024-07-12 10:30:21.507692] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:27.654 [2024-07-12 10:30:21.507731] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x616000006980 name Existed_Raid, state configuring 00:16:27.654 10:30:21 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:16:27.654 10:30:21 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:27.912 [2024-07-12 10:30:21.679772] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:27.912 [2024-07-12 10:30:21.681649] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:27.912 [2024-07-12 10:30:21.681707] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:27.912 [2024-07-12 10:30:21.681727] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:27.912 [2024-07-12 10:30:21.681753] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:27.912 10:30:21 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:27.912 10:30:21 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:27.912 10:30:21 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:27.912 10:30:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:27.912 10:30:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:27.912 10:30:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:27.912 10:30:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:27.912 10:30:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:27.912 10:30:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:27.912 10:30:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:27.912 10:30:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:27.912 10:30:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:27.912 10:30:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:27.912 10:30:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:28.169 10:30:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:28.169 "name": "Existed_Raid", 00:16:28.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.169 "strip_size_kb": 64, 00:16:28.169 "state": "configuring", 00:16:28.169 "raid_level": "raid0", 00:16:28.169 "superblock": false, 00:16:28.169 "num_base_bdevs": 3, 00:16:28.169 "num_base_bdevs_discovered": 1, 00:16:28.169 "num_base_bdevs_operational": 3, 00:16:28.169 "base_bdevs_list": [ 00:16:28.169 { 00:16:28.169 "name": "BaseBdev1", 00:16:28.169 "uuid": "cd97cbc7-0690-4a95-8409-699aa7ddf7f0", 00:16:28.169 "is_configured": true, 00:16:28.169 "data_offset": 0, 00:16:28.169 "data_size": 65536 00:16:28.169 }, 00:16:28.169 { 00:16:28.169 "name": "BaseBdev2", 00:16:28.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.169 "is_configured": false, 00:16:28.169 "data_offset": 0, 00:16:28.169 "data_size": 0 00:16:28.169 }, 00:16:28.169 { 00:16:28.169 "name": "BaseBdev3", 00:16:28.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.169 "is_configured": false, 00:16:28.169 "data_offset": 0, 00:16:28.169 "data_size": 0 00:16:28.169 } 00:16:28.169 ] 00:16:28.169 }' 00:16:28.169 10:30:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:28.169 10:30:21 -- common/autotest_common.sh@10 -- # set +x 00:16:28.735 10:30:22 -- bdev/bdev_raid.sh@256 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:28.994 [2024-07-12 10:30:22.808176] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:28.994 BaseBdev2 00:16:28.994 10:30:22 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:28.994 10:30:22 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:16:28.994 10:30:22 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:28.994 10:30:22 -- common/autotest_common.sh@889 -- # local i 00:16:28.994 10:30:22 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:28.994 10:30:22 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:28.994 10:30:22 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:29.254 10:30:23 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:29.512 [ 00:16:29.512 { 00:16:29.512 "name": "BaseBdev2", 00:16:29.512 "aliases": [ 00:16:29.512 "6da9f4ab-fd58-4bbf-9463-d33826258e05" 00:16:29.512 ], 00:16:29.512 "product_name": "Malloc disk", 00:16:29.512 "block_size": 512, 00:16:29.512 "num_blocks": 65536, 00:16:29.512 "uuid": "6da9f4ab-fd58-4bbf-9463-d33826258e05", 00:16:29.512 "assigned_rate_limits": { 00:16:29.512 "rw_ios_per_sec": 0, 00:16:29.512 "rw_mbytes_per_sec": 0, 00:16:29.512 "r_mbytes_per_sec": 0, 00:16:29.512 "w_mbytes_per_sec": 0 00:16:29.512 }, 00:16:29.512 "claimed": true, 00:16:29.512 "claim_type": "exclusive_write", 00:16:29.512 "zoned": false, 00:16:29.512 "supported_io_types": { 00:16:29.512 "read": true, 00:16:29.512 "write": true, 00:16:29.512 "unmap": true, 00:16:29.512 "write_zeroes": true, 00:16:29.512 "flush": true, 00:16:29.512 "reset": true, 00:16:29.512 "compare": false, 00:16:29.512 "compare_and_write": false, 00:16:29.512 "abort": true, 00:16:29.512 "nvme_admin": false, 00:16:29.512 "nvme_io": false 00:16:29.512 }, 00:16:29.512 "memory_domains": [ 00:16:29.512 { 00:16:29.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:29.512 "dma_device_type": 2 00:16:29.512 } 00:16:29.512 ], 00:16:29.512 "driver_specific": {} 00:16:29.512 } 00:16:29.512 ] 00:16:29.512 10:30:23 -- common/autotest_common.sh@895 -- # return 0 00:16:29.512 10:30:23 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:29.512 10:30:23 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:29.512 10:30:23 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:29.512 10:30:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:29.512 10:30:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:29.512 10:30:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:29.512 10:30:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:29.512 10:30:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:29.512 10:30:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:29.512 10:30:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:29.512 10:30:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:29.512 10:30:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:29.512 10:30:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:29.512 10:30:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:16:29.770 10:30:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:29.770 "name": "Existed_Raid", 00:16:29.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.770 "strip_size_kb": 64, 00:16:29.770 "state": "configuring", 00:16:29.770 "raid_level": "raid0", 00:16:29.770 "superblock": false, 00:16:29.770 "num_base_bdevs": 3, 00:16:29.770 "num_base_bdevs_discovered": 2, 00:16:29.770 "num_base_bdevs_operational": 3, 00:16:29.770 "base_bdevs_list": [ 00:16:29.770 { 00:16:29.770 "name": "BaseBdev1", 00:16:29.770 "uuid": "cd97cbc7-0690-4a95-8409-699aa7ddf7f0", 00:16:29.770 "is_configured": true, 00:16:29.770 "data_offset": 0, 00:16:29.770 "data_size": 65536 00:16:29.770 }, 00:16:29.770 { 00:16:29.770 "name": "BaseBdev2", 00:16:29.770 "uuid": "6da9f4ab-fd58-4bbf-9463-d33826258e05", 00:16:29.770 "is_configured": true, 00:16:29.770 "data_offset": 0, 00:16:29.771 "data_size": 65536 00:16:29.771 }, 00:16:29.771 { 00:16:29.771 "name": "BaseBdev3", 00:16:29.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.771 "is_configured": false, 00:16:29.771 "data_offset": 0, 00:16:29.771 "data_size": 0 00:16:29.771 } 00:16:29.771 ] 00:16:29.771 }' 00:16:29.771 10:30:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:29.771 10:30:23 -- common/autotest_common.sh@10 -- # set +x 00:16:30.337 10:30:24 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:30.594 [2024-07-12 10:30:24.316000] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:30.594 [2024-07-12 10:30:24.316044] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:16:30.594 [2024-07-12 10:30:24.316054] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:30.594 [2024-07-12 10:30:24.316158] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:16:30.594 [2024-07-12 10:30:24.316836] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:16:30.594 [2024-07-12 10:30:24.316858] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:16:30.594 BaseBdev3 00:16:30.594 [2024-07-12 10:30:24.317294] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:30.594 10:30:24 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:16:30.594 10:30:24 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:16:30.594 10:30:24 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:30.594 10:30:24 -- common/autotest_common.sh@889 -- # local i 00:16:30.594 10:30:24 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:30.594 10:30:24 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:30.594 10:30:24 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:30.854 10:30:24 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:30.854 [ 00:16:30.854 { 00:16:30.854 "name": "BaseBdev3", 00:16:30.854 "aliases": [ 00:16:30.854 "0c19c49b-63af-4b8e-aec0-923328020cdf" 00:16:30.854 ], 00:16:30.854 "product_name": "Malloc disk", 00:16:30.854 "block_size": 512, 00:16:30.854 "num_blocks": 65536, 00:16:30.854 "uuid": "0c19c49b-63af-4b8e-aec0-923328020cdf", 00:16:30.854 "assigned_rate_limits": { 00:16:30.854 
"rw_ios_per_sec": 0, 00:16:30.854 "rw_mbytes_per_sec": 0, 00:16:30.854 "r_mbytes_per_sec": 0, 00:16:30.854 "w_mbytes_per_sec": 0 00:16:30.854 }, 00:16:30.854 "claimed": true, 00:16:30.854 "claim_type": "exclusive_write", 00:16:30.854 "zoned": false, 00:16:30.854 "supported_io_types": { 00:16:30.854 "read": true, 00:16:30.854 "write": true, 00:16:30.854 "unmap": true, 00:16:30.854 "write_zeroes": true, 00:16:30.854 "flush": true, 00:16:30.854 "reset": true, 00:16:30.854 "compare": false, 00:16:30.854 "compare_and_write": false, 00:16:30.854 "abort": true, 00:16:30.854 "nvme_admin": false, 00:16:30.854 "nvme_io": false 00:16:30.854 }, 00:16:30.854 "memory_domains": [ 00:16:30.854 { 00:16:30.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:30.854 "dma_device_type": 2 00:16:30.854 } 00:16:30.854 ], 00:16:30.854 "driver_specific": {} 00:16:30.854 } 00:16:30.854 ] 00:16:31.113 10:30:24 -- common/autotest_common.sh@895 -- # return 0 00:16:31.113 10:30:24 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:31.113 10:30:24 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:31.113 10:30:24 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:16:31.113 10:30:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:31.113 10:30:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:31.113 10:30:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:31.113 10:30:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:31.113 10:30:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:31.113 10:30:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:31.113 10:30:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:31.113 10:30:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:31.113 10:30:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:31.113 10:30:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:31.113 10:30:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:31.113 10:30:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:31.113 "name": "Existed_Raid", 00:16:31.113 "uuid": "0b1ca611-4837-41c9-abdd-b18431e0d18b", 00:16:31.113 "strip_size_kb": 64, 00:16:31.113 "state": "online", 00:16:31.113 "raid_level": "raid0", 00:16:31.113 "superblock": false, 00:16:31.113 "num_base_bdevs": 3, 00:16:31.113 "num_base_bdevs_discovered": 3, 00:16:31.113 "num_base_bdevs_operational": 3, 00:16:31.113 "base_bdevs_list": [ 00:16:31.113 { 00:16:31.113 "name": "BaseBdev1", 00:16:31.113 "uuid": "cd97cbc7-0690-4a95-8409-699aa7ddf7f0", 00:16:31.113 "is_configured": true, 00:16:31.113 "data_offset": 0, 00:16:31.113 "data_size": 65536 00:16:31.113 }, 00:16:31.113 { 00:16:31.113 "name": "BaseBdev2", 00:16:31.113 "uuid": "6da9f4ab-fd58-4bbf-9463-d33826258e05", 00:16:31.113 "is_configured": true, 00:16:31.113 "data_offset": 0, 00:16:31.113 "data_size": 65536 00:16:31.113 }, 00:16:31.113 { 00:16:31.113 "name": "BaseBdev3", 00:16:31.113 "uuid": "0c19c49b-63af-4b8e-aec0-923328020cdf", 00:16:31.113 "is_configured": true, 00:16:31.113 "data_offset": 0, 00:16:31.113 "data_size": 65536 00:16:31.113 } 00:16:31.113 ] 00:16:31.113 }' 00:16:31.113 10:30:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:31.113 10:30:24 -- common/autotest_common.sh@10 -- # set +x 00:16:31.681 10:30:25 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:16:31.940 [2024-07-12 10:30:25.851821] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:31.940 [2024-07-12 10:30:25.851853] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:31.940 [2024-07-12 10:30:25.851918] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:32.199 10:30:25 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:32.199 10:30:25 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:16:32.199 10:30:25 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:32.199 10:30:25 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:32.199 10:30:25 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:16:32.199 10:30:25 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:16:32.199 10:30:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:32.199 10:30:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:16:32.199 10:30:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:32.199 10:30:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:32.199 10:30:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:32.199 10:30:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:32.199 10:30:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:32.199 10:30:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:32.199 10:30:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:32.199 10:30:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:32.199 10:30:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:32.457 10:30:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:32.457 "name": "Existed_Raid", 00:16:32.457 "uuid": "0b1ca611-4837-41c9-abdd-b18431e0d18b", 00:16:32.457 "strip_size_kb": 64, 00:16:32.457 "state": "offline", 00:16:32.457 "raid_level": "raid0", 00:16:32.457 "superblock": false, 00:16:32.457 "num_base_bdevs": 3, 00:16:32.457 "num_base_bdevs_discovered": 2, 00:16:32.457 "num_base_bdevs_operational": 2, 00:16:32.457 "base_bdevs_list": [ 00:16:32.457 { 00:16:32.457 "name": null, 00:16:32.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.457 "is_configured": false, 00:16:32.457 "data_offset": 0, 00:16:32.457 "data_size": 65536 00:16:32.457 }, 00:16:32.457 { 00:16:32.457 "name": "BaseBdev2", 00:16:32.457 "uuid": "6da9f4ab-fd58-4bbf-9463-d33826258e05", 00:16:32.457 "is_configured": true, 00:16:32.457 "data_offset": 0, 00:16:32.457 "data_size": 65536 00:16:32.457 }, 00:16:32.457 { 00:16:32.457 "name": "BaseBdev3", 00:16:32.457 "uuid": "0c19c49b-63af-4b8e-aec0-923328020cdf", 00:16:32.457 "is_configured": true, 00:16:32.457 "data_offset": 0, 00:16:32.458 "data_size": 65536 00:16:32.458 } 00:16:32.458 ] 00:16:32.458 }' 00:16:32.458 10:30:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:32.458 10:30:26 -- common/autotest_common.sh@10 -- # set +x 00:16:33.025 10:30:26 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:33.025 10:30:26 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:33.025 10:30:26 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:33.025 10:30:26 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:33.284 10:30:27 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:33.284 10:30:27 -- bdev/bdev_raid.sh@275 -- 
# '[' Existed_Raid '!=' Existed_Raid ']' 00:16:33.284 10:30:27 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:33.543 [2024-07-12 10:30:27.335702] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:33.543 10:30:27 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:33.543 10:30:27 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:33.543 10:30:27 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:33.543 10:30:27 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:33.802 10:30:27 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:33.802 10:30:27 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:33.802 10:30:27 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:34.061 [2024-07-12 10:30:27.806053] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:34.061 [2024-07-12 10:30:27.806119] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:16:34.061 10:30:27 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:34.061 10:30:27 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:34.061 10:30:27 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:34.061 10:30:27 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:34.320 10:30:28 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:34.320 10:30:28 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:34.320 10:30:28 -- bdev/bdev_raid.sh@287 -- # killprocess 117870 00:16:34.320 10:30:28 -- common/autotest_common.sh@926 -- # '[' -z 117870 ']' 00:16:34.320 10:30:28 -- common/autotest_common.sh@930 -- # kill -0 117870 00:16:34.320 10:30:28 -- common/autotest_common.sh@931 -- # uname 00:16:34.320 10:30:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:34.320 10:30:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 117870 00:16:34.320 10:30:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:34.320 10:30:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:34.320 10:30:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 117870' 00:16:34.320 killing process with pid 117870 00:16:34.320 10:30:28 -- common/autotest_common.sh@945 -- # kill 117870 00:16:34.320 10:30:28 -- common/autotest_common.sh@950 -- # wait 117870 00:16:34.320 [2024-07-12 10:30:28.137957] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:34.320 [2024-07-12 10:30:28.138068] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:35.255 ************************************ 00:16:35.255 END TEST raid_state_function_test 00:16:35.255 ************************************ 00:16:35.255 10:30:29 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:35.255 00:16:35.255 real 0m11.851s 00:16:35.255 user 0m21.068s 00:16:35.255 sys 0m1.351s 00:16:35.255 10:30:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:35.255 10:30:29 -- common/autotest_common.sh@10 -- # set +x 00:16:35.513 10:30:29 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:16:35.513 10:30:29 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:16:35.513 10:30:29 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:16:35.513 10:30:29 -- common/autotest_common.sh@10 -- # set +x 00:16:35.513 ************************************ 00:16:35.513 START TEST raid_state_function_test_sb 00:16:35.513 ************************************ 00:16:35.513 10:30:29 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 3 true 00:16:35.513 10:30:29 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:16:35.513 10:30:29 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:16:35.513 10:30:29 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:16:35.513 10:30:29 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:35.513 10:30:29 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:16:35.513 10:30:29 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:35.513 10:30:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:35.513 10:30:29 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:16:35.513 10:30:29 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:35.513 10:30:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:35.513 10:30:29 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:16:35.513 10:30:29 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:35.513 10:30:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:35.513 10:30:29 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:16:35.513 10:30:29 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:35.513 10:30:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:35.513 10:30:29 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:35.513 10:30:29 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:35.513 10:30:29 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:35.513 10:30:29 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:35.513 10:30:29 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:35.513 10:30:29 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:16:35.513 10:30:29 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:16:35.513 10:30:29 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:16:35.513 10:30:29 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:16:35.513 10:30:29 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:16:35.513 10:30:29 -- bdev/bdev_raid.sh@226 -- # raid_pid=118274 00:16:35.513 Process raid pid: 118274 00:16:35.513 10:30:29 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 118274' 00:16:35.513 10:30:29 -- bdev/bdev_raid.sh@228 -- # waitforlisten 118274 /var/tmp/spdk-raid.sock 00:16:35.513 10:30:29 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:35.513 10:30:29 -- common/autotest_common.sh@819 -- # '[' -z 118274 ']' 00:16:35.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:35.513 10:30:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:35.513 10:30:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:35.513 10:30:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:35.513 10:30:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:35.513 10:30:29 -- common/autotest_common.sh@10 -- # set +x 00:16:35.513 [2024-07-12 10:30:29.278331] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
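
The trace above is the start of the superblock variant of the state-function test, and it repeats the pattern used throughout this log: launch bdev_svc on a private RPC socket, register malloc base bdevs, assemble them into a raid0 bdev over rpc.py (here with -s, so an on-disk superblock is written), then verify state by filtering bdev_raid_get_bdevs output with jq. Because has_redundancy returns 1 for raid0, the test expects that deleting any base bdev drives the array from online to offline. Below is a minimal standalone sketch of that flow, built only from commands that appear verbatim in this trace; the trailing '| .state' jq step and the inline "expect" comments are illustrative additions, a real caller must wait for the RPC socket before issuing commands (the harness uses waitforlisten), and note the harness itself actually issues bdev_raid_create before the base bdevs exist and lets the array assemble as members appear (the "base bdev ... doesn't exist now" lines above):

    # Start the standalone bdev service with raid debug logging on a private RPC socket.
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'

    # Three malloc base bdevs: 32 MiB each with 512-byte blocks (65536 blocks, as in the dumps above).
    for i in 1 2 3; do $rpc bdev_malloc_create 32 512 -b BaseBdev$i; done

    # raid0 across the three members with a 64 KiB strip size; -s writes the superblock.
    $rpc bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

    # Verification pattern used throughout this log: select the raid bdev out of the full listing.
    $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'    # expect: online

    # raid0 carries no redundancy, so losing one member should leave the array offline.
    $rpc bdev_malloc_delete BaseBdev2
    $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'    # expect: offline

Teardown in the harness is simply killprocess on the bdev_svc pid, which produces the raid_bdev_fini_start and raid_bdev_exit debug lines seen at the end of each test above.
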
00:16:35.513 [2024-07-12 10:30:29.278550] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:35.772 [2024-07-12 10:30:29.451625] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:35.772 [2024-07-12 10:30:29.680252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:36.030 [2024-07-12 10:30:29.852707] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:36.289 10:30:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:36.289 10:30:30 -- common/autotest_common.sh@852 -- # return 0 00:16:36.289 10:30:30 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:36.547 [2024-07-12 10:30:30.404214] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:36.547 [2024-07-12 10:30:30.404396] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:36.547 [2024-07-12 10:30:30.404495] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:36.547 [2024-07-12 10:30:30.404548] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:36.547 [2024-07-12 10:30:30.404575] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:36.547 [2024-07-12 10:30:30.404704] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:36.547 10:30:30 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:36.547 10:30:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:36.547 10:30:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:36.547 10:30:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:36.547 10:30:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:36.547 10:30:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:36.547 10:30:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:36.547 10:30:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:36.547 10:30:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:36.548 10:30:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:36.548 10:30:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:36.548 10:30:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:36.806 10:30:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:36.806 "name": "Existed_Raid", 00:16:36.806 "uuid": "46ee8b13-ca8b-4f80-846f-0232675a7794", 00:16:36.806 "strip_size_kb": 64, 00:16:36.806 "state": "configuring", 00:16:36.806 "raid_level": "raid0", 00:16:36.806 "superblock": true, 00:16:36.806 "num_base_bdevs": 3, 00:16:36.806 "num_base_bdevs_discovered": 0, 00:16:36.806 "num_base_bdevs_operational": 3, 00:16:36.806 "base_bdevs_list": [ 00:16:36.806 { 00:16:36.806 "name": "BaseBdev1", 00:16:36.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.806 "is_configured": false, 00:16:36.806 "data_offset": 0, 00:16:36.806 "data_size": 0 00:16:36.806 }, 00:16:36.806 { 00:16:36.806 "name": "BaseBdev2", 00:16:36.806 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:36.806 "is_configured": false, 00:16:36.806 "data_offset": 0, 00:16:36.806 "data_size": 0 00:16:36.806 }, 00:16:36.806 { 00:16:36.806 "name": "BaseBdev3", 00:16:36.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.806 "is_configured": false, 00:16:36.806 "data_offset": 0, 00:16:36.806 "data_size": 0 00:16:36.806 } 00:16:36.806 ] 00:16:36.806 }' 00:16:36.806 10:30:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:36.806 10:30:30 -- common/autotest_common.sh@10 -- # set +x 00:16:37.374 10:30:31 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:37.633 [2024-07-12 10:30:31.484182] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:37.633 [2024-07-12 10:30:31.484327] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:16:37.633 10:30:31 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:37.891 [2024-07-12 10:30:31.744278] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:37.891 [2024-07-12 10:30:31.744440] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:37.891 [2024-07-12 10:30:31.744558] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:37.891 [2024-07-12 10:30:31.744611] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:37.891 [2024-07-12 10:30:31.744637] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:37.891 [2024-07-12 10:30:31.744752] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:37.891 10:30:31 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:38.150 [2024-07-12 10:30:31.961386] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:38.150 BaseBdev1 00:16:38.150 10:30:31 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:38.150 10:30:31 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:38.150 10:30:31 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:38.150 10:30:31 -- common/autotest_common.sh@889 -- # local i 00:16:38.150 10:30:31 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:38.150 10:30:31 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:38.150 10:30:31 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:38.408 10:30:32 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:38.666 [ 00:16:38.666 { 00:16:38.666 "name": "BaseBdev1", 00:16:38.666 "aliases": [ 00:16:38.666 "3c840b5c-f65e-4a95-ac31-11519aaaf8fd" 00:16:38.666 ], 00:16:38.666 "product_name": "Malloc disk", 00:16:38.666 "block_size": 512, 00:16:38.666 "num_blocks": 65536, 00:16:38.666 "uuid": "3c840b5c-f65e-4a95-ac31-11519aaaf8fd", 00:16:38.666 "assigned_rate_limits": { 00:16:38.666 "rw_ios_per_sec": 0, 00:16:38.666 "rw_mbytes_per_sec": 0, 00:16:38.666 "r_mbytes_per_sec": 0, 00:16:38.666 
"w_mbytes_per_sec": 0 00:16:38.666 }, 00:16:38.666 "claimed": true, 00:16:38.666 "claim_type": "exclusive_write", 00:16:38.666 "zoned": false, 00:16:38.666 "supported_io_types": { 00:16:38.666 "read": true, 00:16:38.666 "write": true, 00:16:38.666 "unmap": true, 00:16:38.666 "write_zeroes": true, 00:16:38.666 "flush": true, 00:16:38.666 "reset": true, 00:16:38.666 "compare": false, 00:16:38.666 "compare_and_write": false, 00:16:38.666 "abort": true, 00:16:38.666 "nvme_admin": false, 00:16:38.666 "nvme_io": false 00:16:38.666 }, 00:16:38.666 "memory_domains": [ 00:16:38.666 { 00:16:38.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.666 "dma_device_type": 2 00:16:38.666 } 00:16:38.666 ], 00:16:38.666 "driver_specific": {} 00:16:38.666 } 00:16:38.666 ] 00:16:38.666 10:30:32 -- common/autotest_common.sh@895 -- # return 0 00:16:38.666 10:30:32 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:38.666 10:30:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:38.666 10:30:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:38.666 10:30:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:38.666 10:30:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:38.666 10:30:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:38.666 10:30:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:38.666 10:30:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:38.666 10:30:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:38.666 10:30:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:38.666 10:30:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:38.666 10:30:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:38.666 10:30:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:38.666 "name": "Existed_Raid", 00:16:38.666 "uuid": "450eb59a-c1cc-47e4-af7a-e20f0b51d6f5", 00:16:38.666 "strip_size_kb": 64, 00:16:38.666 "state": "configuring", 00:16:38.666 "raid_level": "raid0", 00:16:38.666 "superblock": true, 00:16:38.666 "num_base_bdevs": 3, 00:16:38.666 "num_base_bdevs_discovered": 1, 00:16:38.666 "num_base_bdevs_operational": 3, 00:16:38.667 "base_bdevs_list": [ 00:16:38.667 { 00:16:38.667 "name": "BaseBdev1", 00:16:38.667 "uuid": "3c840b5c-f65e-4a95-ac31-11519aaaf8fd", 00:16:38.667 "is_configured": true, 00:16:38.667 "data_offset": 2048, 00:16:38.667 "data_size": 63488 00:16:38.667 }, 00:16:38.667 { 00:16:38.667 "name": "BaseBdev2", 00:16:38.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.667 "is_configured": false, 00:16:38.667 "data_offset": 0, 00:16:38.667 "data_size": 0 00:16:38.667 }, 00:16:38.667 { 00:16:38.667 "name": "BaseBdev3", 00:16:38.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.667 "is_configured": false, 00:16:38.667 "data_offset": 0, 00:16:38.667 "data_size": 0 00:16:38.667 } 00:16:38.667 ] 00:16:38.667 }' 00:16:38.667 10:30:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:38.667 10:30:32 -- common/autotest_common.sh@10 -- # set +x 00:16:39.601 10:30:33 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:39.601 [2024-07-12 10:30:33.461639] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:39.601 [2024-07-12 10:30:33.461790] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:16:39.601 10:30:33 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:16:39.601 10:30:33 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:40.166 10:30:33 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:40.166 BaseBdev1 00:16:40.166 10:30:34 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:16:40.166 10:30:34 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:40.166 10:30:34 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:40.166 10:30:34 -- common/autotest_common.sh@889 -- # local i 00:16:40.166 10:30:34 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:40.166 10:30:34 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:40.166 10:30:34 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:40.424 10:30:34 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:40.683 [ 00:16:40.683 { 00:16:40.683 "name": "BaseBdev1", 00:16:40.683 "aliases": [ 00:16:40.683 "14e8f9e1-3a6d-4a44-936c-eb85b28f3cb1" 00:16:40.683 ], 00:16:40.683 "product_name": "Malloc disk", 00:16:40.683 "block_size": 512, 00:16:40.683 "num_blocks": 65536, 00:16:40.683 "uuid": "14e8f9e1-3a6d-4a44-936c-eb85b28f3cb1", 00:16:40.683 "assigned_rate_limits": { 00:16:40.683 "rw_ios_per_sec": 0, 00:16:40.683 "rw_mbytes_per_sec": 0, 00:16:40.683 "r_mbytes_per_sec": 0, 00:16:40.683 "w_mbytes_per_sec": 0 00:16:40.683 }, 00:16:40.683 "claimed": false, 00:16:40.683 "zoned": false, 00:16:40.683 "supported_io_types": { 00:16:40.683 "read": true, 00:16:40.683 "write": true, 00:16:40.683 "unmap": true, 00:16:40.683 "write_zeroes": true, 00:16:40.683 "flush": true, 00:16:40.683 "reset": true, 00:16:40.683 "compare": false, 00:16:40.683 "compare_and_write": false, 00:16:40.683 "abort": true, 00:16:40.683 "nvme_admin": false, 00:16:40.683 "nvme_io": false 00:16:40.683 }, 00:16:40.683 "memory_domains": [ 00:16:40.683 { 00:16:40.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:40.683 "dma_device_type": 2 00:16:40.683 } 00:16:40.683 ], 00:16:40.683 "driver_specific": {} 00:16:40.683 } 00:16:40.683 ] 00:16:40.683 10:30:34 -- common/autotest_common.sh@895 -- # return 0 00:16:40.683 10:30:34 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:40.942 [2024-07-12 10:30:34.605911] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:40.942 [2024-07-12 10:30:34.607828] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:40.942 [2024-07-12 10:30:34.607989] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:40.942 [2024-07-12 10:30:34.608086] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:40.942 [2024-07-12 10:30:34.608144] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:40.942 10:30:34 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:40.942 10:30:34 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:40.942 
10:30:34 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:40.942 10:30:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:40.942 10:30:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:40.942 10:30:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:40.942 10:30:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:40.942 10:30:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:40.942 10:30:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:40.942 10:30:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:40.942 10:30:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:40.942 10:30:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:40.942 10:30:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:40.942 10:30:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:41.200 10:30:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:41.200 "name": "Existed_Raid", 00:16:41.200 "uuid": "d8e64a2c-84ca-49c4-976a-8deb0698f422", 00:16:41.200 "strip_size_kb": 64, 00:16:41.200 "state": "configuring", 00:16:41.200 "raid_level": "raid0", 00:16:41.200 "superblock": true, 00:16:41.200 "num_base_bdevs": 3, 00:16:41.200 "num_base_bdevs_discovered": 1, 00:16:41.200 "num_base_bdevs_operational": 3, 00:16:41.200 "base_bdevs_list": [ 00:16:41.200 { 00:16:41.200 "name": "BaseBdev1", 00:16:41.200 "uuid": "14e8f9e1-3a6d-4a44-936c-eb85b28f3cb1", 00:16:41.200 "is_configured": true, 00:16:41.200 "data_offset": 2048, 00:16:41.200 "data_size": 63488 00:16:41.200 }, 00:16:41.200 { 00:16:41.200 "name": "BaseBdev2", 00:16:41.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.200 "is_configured": false, 00:16:41.200 "data_offset": 0, 00:16:41.200 "data_size": 0 00:16:41.200 }, 00:16:41.200 { 00:16:41.200 "name": "BaseBdev3", 00:16:41.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.200 "is_configured": false, 00:16:41.200 "data_offset": 0, 00:16:41.200 "data_size": 0 00:16:41.200 } 00:16:41.200 ] 00:16:41.200 }' 00:16:41.200 10:30:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:41.200 10:30:34 -- common/autotest_common.sh@10 -- # set +x 00:16:41.767 10:30:35 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:42.026 [2024-07-12 10:30:35.759883] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:42.026 BaseBdev2 00:16:42.026 10:30:35 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:42.026 10:30:35 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:16:42.026 10:30:35 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:42.026 10:30:35 -- common/autotest_common.sh@889 -- # local i 00:16:42.027 10:30:35 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:42.027 10:30:35 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:42.027 10:30:35 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:42.285 10:30:35 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:42.285 [ 00:16:42.285 { 00:16:42.285 "name": "BaseBdev2", 00:16:42.285 "aliases": [ 00:16:42.285 
"937457a3-6141-45e8-a6ab-2b6e5bb11e94" 00:16:42.285 ], 00:16:42.285 "product_name": "Malloc disk", 00:16:42.285 "block_size": 512, 00:16:42.285 "num_blocks": 65536, 00:16:42.285 "uuid": "937457a3-6141-45e8-a6ab-2b6e5bb11e94", 00:16:42.285 "assigned_rate_limits": { 00:16:42.285 "rw_ios_per_sec": 0, 00:16:42.285 "rw_mbytes_per_sec": 0, 00:16:42.285 "r_mbytes_per_sec": 0, 00:16:42.285 "w_mbytes_per_sec": 0 00:16:42.285 }, 00:16:42.285 "claimed": true, 00:16:42.285 "claim_type": "exclusive_write", 00:16:42.285 "zoned": false, 00:16:42.286 "supported_io_types": { 00:16:42.286 "read": true, 00:16:42.286 "write": true, 00:16:42.286 "unmap": true, 00:16:42.286 "write_zeroes": true, 00:16:42.286 "flush": true, 00:16:42.286 "reset": true, 00:16:42.286 "compare": false, 00:16:42.286 "compare_and_write": false, 00:16:42.286 "abort": true, 00:16:42.286 "nvme_admin": false, 00:16:42.286 "nvme_io": false 00:16:42.286 }, 00:16:42.286 "memory_domains": [ 00:16:42.286 { 00:16:42.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:42.286 "dma_device_type": 2 00:16:42.286 } 00:16:42.286 ], 00:16:42.286 "driver_specific": {} 00:16:42.286 } 00:16:42.286 ] 00:16:42.545 10:30:36 -- common/autotest_common.sh@895 -- # return 0 00:16:42.545 10:30:36 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:42.545 10:30:36 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:42.545 10:30:36 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:42.545 10:30:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:42.545 10:30:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:42.545 10:30:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:42.545 10:30:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:42.545 10:30:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:42.545 10:30:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:42.545 10:30:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:42.545 10:30:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:42.545 10:30:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:42.545 10:30:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:42.545 10:30:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:42.545 10:30:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:42.545 "name": "Existed_Raid", 00:16:42.545 "uuid": "d8e64a2c-84ca-49c4-976a-8deb0698f422", 00:16:42.545 "strip_size_kb": 64, 00:16:42.545 "state": "configuring", 00:16:42.545 "raid_level": "raid0", 00:16:42.545 "superblock": true, 00:16:42.545 "num_base_bdevs": 3, 00:16:42.545 "num_base_bdevs_discovered": 2, 00:16:42.545 "num_base_bdevs_operational": 3, 00:16:42.545 "base_bdevs_list": [ 00:16:42.545 { 00:16:42.545 "name": "BaseBdev1", 00:16:42.545 "uuid": "14e8f9e1-3a6d-4a44-936c-eb85b28f3cb1", 00:16:42.545 "is_configured": true, 00:16:42.545 "data_offset": 2048, 00:16:42.545 "data_size": 63488 00:16:42.545 }, 00:16:42.545 { 00:16:42.545 "name": "BaseBdev2", 00:16:42.545 "uuid": "937457a3-6141-45e8-a6ab-2b6e5bb11e94", 00:16:42.545 "is_configured": true, 00:16:42.545 "data_offset": 2048, 00:16:42.545 "data_size": 63488 00:16:42.545 }, 00:16:42.545 { 00:16:42.545 "name": "BaseBdev3", 00:16:42.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.545 "is_configured": false, 00:16:42.545 "data_offset": 0, 00:16:42.545 "data_size": 0 00:16:42.545 
} 00:16:42.545 ] 00:16:42.545 }' 00:16:42.545 10:30:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:42.545 10:30:36 -- common/autotest_common.sh@10 -- # set +x 00:16:43.481 10:30:37 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:43.481 [2024-07-12 10:30:37.351963] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:43.481 BaseBdev3 00:16:43.481 [2024-07-12 10:30:37.352194] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:16:43.481 [2024-07-12 10:30:37.352209] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:43.481 [2024-07-12 10:30:37.352372] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:16:43.481 [2024-07-12 10:30:37.352753] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:16:43.481 [2024-07-12 10:30:37.352768] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:16:43.481 [2024-07-12 10:30:37.352924] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:43.481 10:30:37 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:16:43.481 10:30:37 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:16:43.481 10:30:37 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:43.481 10:30:37 -- common/autotest_common.sh@889 -- # local i 00:16:43.481 10:30:37 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:43.481 10:30:37 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:43.481 10:30:37 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:43.740 10:30:37 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:43.998 [ 00:16:43.998 { 00:16:43.998 "name": "BaseBdev3", 00:16:43.998 "aliases": [ 00:16:43.998 "7b1db40c-d73e-499a-a256-eea672e961d9" 00:16:43.998 ], 00:16:43.998 "product_name": "Malloc disk", 00:16:43.998 "block_size": 512, 00:16:43.998 "num_blocks": 65536, 00:16:43.998 "uuid": "7b1db40c-d73e-499a-a256-eea672e961d9", 00:16:43.998 "assigned_rate_limits": { 00:16:43.998 "rw_ios_per_sec": 0, 00:16:43.998 "rw_mbytes_per_sec": 0, 00:16:43.998 "r_mbytes_per_sec": 0, 00:16:43.998 "w_mbytes_per_sec": 0 00:16:43.998 }, 00:16:43.998 "claimed": true, 00:16:43.998 "claim_type": "exclusive_write", 00:16:43.998 "zoned": false, 00:16:43.998 "supported_io_types": { 00:16:43.998 "read": true, 00:16:43.998 "write": true, 00:16:43.998 "unmap": true, 00:16:43.998 "write_zeroes": true, 00:16:43.998 "flush": true, 00:16:43.998 "reset": true, 00:16:43.998 "compare": false, 00:16:43.998 "compare_and_write": false, 00:16:43.998 "abort": true, 00:16:43.998 "nvme_admin": false, 00:16:43.998 "nvme_io": false 00:16:43.998 }, 00:16:43.998 "memory_domains": [ 00:16:43.998 { 00:16:43.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:43.998 "dma_device_type": 2 00:16:43.998 } 00:16:43.998 ], 00:16:43.998 "driver_specific": {} 00:16:43.998 } 00:16:43.998 ] 00:16:43.998 10:30:37 -- common/autotest_common.sh@895 -- # return 0 00:16:43.998 10:30:37 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:43.998 10:30:37 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:43.999 10:30:37 -- bdev/bdev_raid.sh@259 -- # 
verify_raid_bdev_state Existed_Raid online raid0 64 3 00:16:43.999 10:30:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:43.999 10:30:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:43.999 10:30:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:43.999 10:30:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:43.999 10:30:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:43.999 10:30:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:43.999 10:30:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:43.999 10:30:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:43.999 10:30:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:43.999 10:30:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:43.999 10:30:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:44.257 10:30:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:44.257 "name": "Existed_Raid", 00:16:44.257 "uuid": "d8e64a2c-84ca-49c4-976a-8deb0698f422", 00:16:44.257 "strip_size_kb": 64, 00:16:44.257 "state": "online", 00:16:44.257 "raid_level": "raid0", 00:16:44.257 "superblock": true, 00:16:44.257 "num_base_bdevs": 3, 00:16:44.257 "num_base_bdevs_discovered": 3, 00:16:44.257 "num_base_bdevs_operational": 3, 00:16:44.257 "base_bdevs_list": [ 00:16:44.257 { 00:16:44.257 "name": "BaseBdev1", 00:16:44.257 "uuid": "14e8f9e1-3a6d-4a44-936c-eb85b28f3cb1", 00:16:44.257 "is_configured": true, 00:16:44.257 "data_offset": 2048, 00:16:44.257 "data_size": 63488 00:16:44.257 }, 00:16:44.257 { 00:16:44.257 "name": "BaseBdev2", 00:16:44.257 "uuid": "937457a3-6141-45e8-a6ab-2b6e5bb11e94", 00:16:44.257 "is_configured": true, 00:16:44.257 "data_offset": 2048, 00:16:44.257 "data_size": 63488 00:16:44.257 }, 00:16:44.257 { 00:16:44.257 "name": "BaseBdev3", 00:16:44.257 "uuid": "7b1db40c-d73e-499a-a256-eea672e961d9", 00:16:44.257 "is_configured": true, 00:16:44.257 "data_offset": 2048, 00:16:44.257 "data_size": 63488 00:16:44.257 } 00:16:44.257 ] 00:16:44.257 }' 00:16:44.257 10:30:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:44.257 10:30:38 -- common/autotest_common.sh@10 -- # set +x 00:16:45.193 10:30:38 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:45.193 [2024-07-12 10:30:38.996660] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:45.193 [2024-07-12 10:30:38.996693] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:45.193 [2024-07-12 10:30:38.996764] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:45.194 10:30:39 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:45.194 10:30:39 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:16:45.194 10:30:39 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:45.194 10:30:39 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:45.194 10:30:39 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:16:45.194 10:30:39 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:16:45.194 10:30:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:45.194 10:30:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:16:45.194 10:30:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:45.194 10:30:39 -- bdev/bdev_raid.sh@120 -- # 
local strip_size=64 00:16:45.194 10:30:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:45.194 10:30:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:45.194 10:30:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:45.194 10:30:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:45.194 10:30:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:45.194 10:30:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:45.194 10:30:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:45.453 10:30:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:45.453 "name": "Existed_Raid", 00:16:45.453 "uuid": "d8e64a2c-84ca-49c4-976a-8deb0698f422", 00:16:45.453 "strip_size_kb": 64, 00:16:45.453 "state": "offline", 00:16:45.453 "raid_level": "raid0", 00:16:45.453 "superblock": true, 00:16:45.453 "num_base_bdevs": 3, 00:16:45.453 "num_base_bdevs_discovered": 2, 00:16:45.453 "num_base_bdevs_operational": 2, 00:16:45.453 "base_bdevs_list": [ 00:16:45.453 { 00:16:45.453 "name": null, 00:16:45.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.453 "is_configured": false, 00:16:45.453 "data_offset": 2048, 00:16:45.453 "data_size": 63488 00:16:45.453 }, 00:16:45.453 { 00:16:45.453 "name": "BaseBdev2", 00:16:45.453 "uuid": "937457a3-6141-45e8-a6ab-2b6e5bb11e94", 00:16:45.453 "is_configured": true, 00:16:45.453 "data_offset": 2048, 00:16:45.453 "data_size": 63488 00:16:45.453 }, 00:16:45.453 { 00:16:45.453 "name": "BaseBdev3", 00:16:45.453 "uuid": "7b1db40c-d73e-499a-a256-eea672e961d9", 00:16:45.453 "is_configured": true, 00:16:45.453 "data_offset": 2048, 00:16:45.453 "data_size": 63488 00:16:45.453 } 00:16:45.453 ] 00:16:45.453 }' 00:16:45.453 10:30:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:45.453 10:30:39 -- common/autotest_common.sh@10 -- # set +x 00:16:46.388 10:30:39 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:46.388 10:30:39 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:46.388 10:30:39 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:46.388 10:30:39 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:46.388 10:30:40 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:46.388 10:30:40 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:46.388 10:30:40 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:46.647 [2024-07-12 10:30:40.403480] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:46.647 10:30:40 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:46.647 10:30:40 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:46.647 10:30:40 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:46.647 10:30:40 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:46.905 10:30:40 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:46.905 10:30:40 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:46.905 10:30:40 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:47.164 [2024-07-12 10:30:40.944582] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:47.164 [2024-07-12 
10:30:40.944650] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:16:47.164 10:30:41 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:47.164 10:30:41 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:47.164 10:30:41 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:47.164 10:30:41 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:47.423 10:30:41 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:47.423 10:30:41 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:47.423 10:30:41 -- bdev/bdev_raid.sh@287 -- # killprocess 118274 00:16:47.423 10:30:41 -- common/autotest_common.sh@926 -- # '[' -z 118274 ']' 00:16:47.423 10:30:41 -- common/autotest_common.sh@930 -- # kill -0 118274 00:16:47.423 10:30:41 -- common/autotest_common.sh@931 -- # uname 00:16:47.423 10:30:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:47.423 10:30:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 118274 00:16:47.423 killing process with pid 118274 00:16:47.423 10:30:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:47.423 10:30:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:47.423 10:30:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 118274' 00:16:47.423 10:30:41 -- common/autotest_common.sh@945 -- # kill 118274 00:16:47.423 10:30:41 -- common/autotest_common.sh@950 -- # wait 118274 00:16:47.423 [2024-07-12 10:30:41.238844] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:47.423 [2024-07-12 10:30:41.238946] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:48.358 ************************************ 00:16:48.358 END TEST raid_state_function_test_sb 00:16:48.358 ************************************ 00:16:48.358 10:30:42 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:48.358 00:16:48.358 real 0m13.041s 00:16:48.358 user 0m23.205s 00:16:48.358 sys 0m1.481s 00:16:48.358 10:30:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:48.358 10:30:42 -- common/autotest_common.sh@10 -- # set +x 00:16:48.616 10:30:42 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:16:48.616 10:30:42 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:16:48.616 10:30:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:48.616 10:30:42 -- common/autotest_common.sh@10 -- # set +x 00:16:48.616 ************************************ 00:16:48.616 START TEST raid_superblock_test 00:16:48.616 ************************************ 00:16:48.616 10:30:42 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid0 3 00:16:48.616 10:30:42 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:16:48.616 10:30:42 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:16:48.616 10:30:42 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:16:48.616 10:30:42 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:16:48.616 10:30:42 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:16:48.616 10:30:42 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:16:48.616 10:30:42 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:16:48.616 10:30:42 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:16:48.616 10:30:42 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:16:48.616 10:30:42 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:16:48.616 10:30:42 -- 
bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:16:48.616 10:30:42 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:16:48.616 10:30:42 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:16:48.616 10:30:42 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:16:48.616 10:30:42 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:16:48.616 10:30:42 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:16:48.616 10:30:42 -- bdev/bdev_raid.sh@357 -- # raid_pid=118688 00:16:48.616 10:30:42 -- bdev/bdev_raid.sh@358 -- # waitforlisten 118688 /var/tmp/spdk-raid.sock 00:16:48.616 10:30:42 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:48.616 10:30:42 -- common/autotest_common.sh@819 -- # '[' -z 118688 ']' 00:16:48.616 10:30:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:48.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:48.616 10:30:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:48.616 10:30:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:48.616 10:30:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:48.616 10:30:42 -- common/autotest_common.sh@10 -- # set +x 00:16:48.616 [2024-07-12 10:30:42.383421] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:16:48.616 [2024-07-12 10:30:42.383628] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118688 ] 00:16:48.874 [2024-07-12 10:30:42.556134] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:48.874 [2024-07-12 10:30:42.773756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:49.131 [2024-07-12 10:30:42.940352] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:49.389 10:30:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:49.389 10:30:43 -- common/autotest_common.sh@852 -- # return 0 00:16:49.389 10:30:43 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:16:49.389 10:30:43 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:49.389 10:30:43 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:16:49.389 10:30:43 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:16:49.389 10:30:43 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:49.389 10:30:43 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:49.389 10:30:43 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:49.389 10:30:43 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:49.389 10:30:43 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:16:49.651 malloc1 00:16:49.651 10:30:43 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:49.924 [2024-07-12 10:30:43.700029] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:49.924 [2024-07-12 10:30:43.700219] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:49.924 
[2024-07-12 10:30:43.700289] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:16:49.924 [2024-07-12 10:30:43.700425] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:49.924 [2024-07-12 10:30:43.702652] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:49.924 [2024-07-12 10:30:43.702824] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:49.924 pt1 00:16:49.924 10:30:43 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:49.924 10:30:43 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:49.924 10:30:43 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:16:49.924 10:30:43 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:16:49.924 10:30:43 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:49.924 10:30:43 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:49.924 10:30:43 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:49.924 10:30:43 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:49.924 10:30:43 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:16:50.200 malloc2 00:16:50.200 10:30:43 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:50.469 [2024-07-12 10:30:44.238498] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:50.469 [2024-07-12 10:30:44.238683] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:50.469 [2024-07-12 10:30:44.238756] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:16:50.469 [2024-07-12 10:30:44.238895] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:50.469 [2024-07-12 10:30:44.241186] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:50.469 [2024-07-12 10:30:44.241362] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:50.469 pt2 00:16:50.469 10:30:44 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:50.469 10:30:44 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:50.469 10:30:44 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:16:50.469 10:30:44 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:16:50.469 10:30:44 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:50.469 10:30:44 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:50.469 10:30:44 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:50.469 10:30:44 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:50.469 10:30:44 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:16:50.728 malloc3 00:16:50.728 10:30:44 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:50.728 [2024-07-12 10:30:44.623626] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:50.728 [2024-07-12 10:30:44.623793] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:50.728 
[2024-07-12 10:30:44.623863] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:50.728 [2024-07-12 10:30:44.623986] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:50.728 [2024-07-12 10:30:44.626197] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:50.728 [2024-07-12 10:30:44.626370] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:50.728 pt3 00:16:50.728 10:30:44 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:50.728 10:30:44 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:50.728 10:30:44 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:16:50.986 [2024-07-12 10:30:44.843702] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:50.986 [2024-07-12 10:30:44.845631] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:50.986 [2024-07-12 10:30:44.845813] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:50.986 [2024-07-12 10:30:44.846030] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:16:50.986 [2024-07-12 10:30:44.846135] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:50.986 [2024-07-12 10:30:44.846299] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:16:50.986 [2024-07-12 10:30:44.846681] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:16:50.986 [2024-07-12 10:30:44.846798] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:16:50.986 [2024-07-12 10:30:44.847038] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:50.986 10:30:44 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:16:50.986 10:30:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:50.986 10:30:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:50.986 10:30:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:50.986 10:30:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:50.986 10:30:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:50.986 10:30:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:50.986 10:30:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:50.986 10:30:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:50.986 10:30:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:50.986 10:30:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:50.986 10:30:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.245 10:30:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:51.245 "name": "raid_bdev1", 00:16:51.245 "uuid": "14609caa-d233-42ea-b094-d5c70ef788d4", 00:16:51.245 "strip_size_kb": 64, 00:16:51.245 "state": "online", 00:16:51.245 "raid_level": "raid0", 00:16:51.245 "superblock": true, 00:16:51.245 "num_base_bdevs": 3, 00:16:51.245 "num_base_bdevs_discovered": 3, 00:16:51.245 "num_base_bdevs_operational": 3, 00:16:51.245 "base_bdevs_list": [ 00:16:51.245 { 00:16:51.245 "name": "pt1", 00:16:51.245 "uuid": 
"25f5443c-59c9-5748-84c2-dde2da59afb4", 00:16:51.245 "is_configured": true, 00:16:51.245 "data_offset": 2048, 00:16:51.245 "data_size": 63488 00:16:51.245 }, 00:16:51.245 { 00:16:51.245 "name": "pt2", 00:16:51.245 "uuid": "236878d0-011e-5098-a82f-e84594eed52e", 00:16:51.245 "is_configured": true, 00:16:51.245 "data_offset": 2048, 00:16:51.245 "data_size": 63488 00:16:51.245 }, 00:16:51.245 { 00:16:51.245 "name": "pt3", 00:16:51.245 "uuid": "4637f688-24a2-536c-b91c-4676e2e16de4", 00:16:51.245 "is_configured": true, 00:16:51.245 "data_offset": 2048, 00:16:51.245 "data_size": 63488 00:16:51.245 } 00:16:51.245 ] 00:16:51.245 }' 00:16:51.245 10:30:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:51.245 10:30:45 -- common/autotest_common.sh@10 -- # set +x 00:16:51.812 10:30:45 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:51.812 10:30:45 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:16:52.070 [2024-07-12 10:30:45.904086] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:52.070 10:30:45 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=14609caa-d233-42ea-b094-d5c70ef788d4 00:16:52.070 10:30:45 -- bdev/bdev_raid.sh@380 -- # '[' -z 14609caa-d233-42ea-b094-d5c70ef788d4 ']' 00:16:52.070 10:30:45 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:52.327 [2024-07-12 10:30:46.147936] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:52.327 [2024-07-12 10:30:46.148093] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:52.327 [2024-07-12 10:30:46.148259] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:52.327 [2024-07-12 10:30:46.148464] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:52.327 [2024-07-12 10:30:46.148586] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:16:52.327 10:30:46 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:52.327 10:30:46 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:16:52.584 10:30:46 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:16:52.584 10:30:46 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:16:52.584 10:30:46 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:52.584 10:30:46 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:52.842 10:30:46 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:52.842 10:30:46 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:53.100 10:30:46 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:53.100 10:30:46 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:16:53.100 10:30:46 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:16:53.100 10:30:46 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:53.358 10:30:47 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:16:53.358 10:30:47 -- bdev/bdev_raid.sh@401 -- # NOT 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:53.358 10:30:47 -- common/autotest_common.sh@640 -- # local es=0 00:16:53.358 10:30:47 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:53.358 10:30:47 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:53.358 10:30:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:53.358 10:30:47 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:53.358 10:30:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:53.358 10:30:47 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:53.358 10:30:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:53.358 10:30:47 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:53.358 10:30:47 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:53.358 10:30:47 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:53.616 [2024-07-12 10:30:47.364124] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:53.616 [2024-07-12 10:30:47.366086] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:53.616 [2024-07-12 10:30:47.366268] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:53.616 [2024-07-12 10:30:47.366354] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:16:53.616 [2024-07-12 10:30:47.366512] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:16:53.616 [2024-07-12 10:30:47.366581] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:16:53.616 [2024-07-12 10:30:47.366654] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:53.616 [2024-07-12 10:30:47.366687] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state configuring 00:16:53.616 request: 00:16:53.616 { 00:16:53.616 "name": "raid_bdev1", 00:16:53.616 "raid_level": "raid0", 00:16:53.616 "base_bdevs": [ 00:16:53.616 "malloc1", 00:16:53.616 "malloc2", 00:16:53.616 "malloc3" 00:16:53.616 ], 00:16:53.616 "superblock": false, 00:16:53.616 "strip_size_kb": 64, 00:16:53.616 "method": "bdev_raid_create", 00:16:53.616 "req_id": 1 00:16:53.616 } 00:16:53.616 Got JSON-RPC error response 00:16:53.616 response: 00:16:53.616 { 00:16:53.616 "code": -17, 00:16:53.616 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:53.616 } 00:16:53.616 10:30:47 -- common/autotest_common.sh@643 -- # es=1 00:16:53.616 10:30:47 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:53.616 10:30:47 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:53.616 10:30:47 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:53.616 10:30:47 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:53.616 10:30:47 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:16:53.874 10:30:47 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:16:53.874 10:30:47 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:16:53.874 10:30:47 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:54.133 [2024-07-12 10:30:47.816132] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:54.133 [2024-07-12 10:30:47.816326] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:54.133 [2024-07-12 10:30:47.816393] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:54.133 [2024-07-12 10:30:47.816509] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:54.133 [2024-07-12 10:30:47.818778] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:54.133 [2024-07-12 10:30:47.818957] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:54.133 [2024-07-12 10:30:47.819179] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:16:54.133 [2024-07-12 10:30:47.819374] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:54.133 pt1 00:16:54.133 10:30:47 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:16:54.133 10:30:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:54.133 10:30:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:54.133 10:30:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:54.133 10:30:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:54.133 10:30:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:54.133 10:30:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:54.133 10:30:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:54.133 10:30:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:54.133 10:30:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:54.133 10:30:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:54.133 10:30:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.133 10:30:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:54.133 "name": "raid_bdev1", 00:16:54.133 "uuid": "14609caa-d233-42ea-b094-d5c70ef788d4", 00:16:54.133 "strip_size_kb": 64, 00:16:54.133 "state": "configuring", 00:16:54.133 "raid_level": "raid0", 00:16:54.133 "superblock": true, 00:16:54.133 "num_base_bdevs": 3, 00:16:54.133 "num_base_bdevs_discovered": 1, 00:16:54.133 "num_base_bdevs_operational": 3, 00:16:54.133 "base_bdevs_list": [ 00:16:54.133 { 00:16:54.133 "name": "pt1", 00:16:54.133 "uuid": "25f5443c-59c9-5748-84c2-dde2da59afb4", 00:16:54.133 "is_configured": true, 00:16:54.133 "data_offset": 2048, 00:16:54.133 "data_size": 63488 00:16:54.133 }, 00:16:54.133 { 00:16:54.133 "name": null, 00:16:54.133 "uuid": "236878d0-011e-5098-a82f-e84594eed52e", 00:16:54.133 "is_configured": false, 00:16:54.133 "data_offset": 2048, 00:16:54.133 "data_size": 63488 00:16:54.133 }, 00:16:54.133 { 00:16:54.133 "name": null, 00:16:54.133 "uuid": "4637f688-24a2-536c-b91c-4676e2e16de4", 00:16:54.133 "is_configured": false, 00:16:54.133 
"data_offset": 2048, 00:16:54.133 "data_size": 63488 00:16:54.133 } 00:16:54.133 ] 00:16:54.133 }' 00:16:54.133 10:30:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:54.133 10:30:48 -- common/autotest_common.sh@10 -- # set +x 00:16:54.701 10:30:48 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:16:54.701 10:30:48 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:54.959 [2024-07-12 10:30:48.832280] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:54.959 [2024-07-12 10:30:48.832458] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:54.959 [2024-07-12 10:30:48.832529] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:16:54.959 [2024-07-12 10:30:48.832646] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:54.959 [2024-07-12 10:30:48.833101] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:54.959 [2024-07-12 10:30:48.833157] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:54.959 [2024-07-12 10:30:48.833372] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:54.959 [2024-07-12 10:30:48.833427] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:54.959 pt2 00:16:54.959 10:30:48 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:55.217 [2024-07-12 10:30:49.016344] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:55.217 10:30:49 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:16:55.217 10:30:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:55.217 10:30:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:55.217 10:30:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:55.217 10:30:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:55.217 10:30:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:55.217 10:30:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:55.217 10:30:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:55.217 10:30:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:55.217 10:30:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:55.217 10:30:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:55.217 10:30:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.475 10:30:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:55.475 "name": "raid_bdev1", 00:16:55.475 "uuid": "14609caa-d233-42ea-b094-d5c70ef788d4", 00:16:55.475 "strip_size_kb": 64, 00:16:55.475 "state": "configuring", 00:16:55.475 "raid_level": "raid0", 00:16:55.475 "superblock": true, 00:16:55.475 "num_base_bdevs": 3, 00:16:55.475 "num_base_bdevs_discovered": 1, 00:16:55.475 "num_base_bdevs_operational": 3, 00:16:55.475 "base_bdevs_list": [ 00:16:55.475 { 00:16:55.475 "name": "pt1", 00:16:55.475 "uuid": "25f5443c-59c9-5748-84c2-dde2da59afb4", 00:16:55.475 "is_configured": true, 00:16:55.475 "data_offset": 2048, 00:16:55.475 "data_size": 63488 00:16:55.475 }, 00:16:55.475 { 00:16:55.475 "name": null, 00:16:55.475 "uuid": 
"236878d0-011e-5098-a82f-e84594eed52e", 00:16:55.475 "is_configured": false, 00:16:55.475 "data_offset": 2048, 00:16:55.475 "data_size": 63488 00:16:55.475 }, 00:16:55.475 { 00:16:55.475 "name": null, 00:16:55.475 "uuid": "4637f688-24a2-536c-b91c-4676e2e16de4", 00:16:55.475 "is_configured": false, 00:16:55.475 "data_offset": 2048, 00:16:55.475 "data_size": 63488 00:16:55.475 } 00:16:55.475 ] 00:16:55.475 }' 00:16:55.475 10:30:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:55.475 10:30:49 -- common/autotest_common.sh@10 -- # set +x 00:16:56.040 10:30:49 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:16:56.040 10:30:49 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:56.040 10:30:49 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:56.298 [2024-07-12 10:30:50.144482] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:56.298 [2024-07-12 10:30:50.144661] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:56.298 [2024-07-12 10:30:50.144723] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:56.298 [2024-07-12 10:30:50.144887] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:56.298 [2024-07-12 10:30:50.145353] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:56.298 [2024-07-12 10:30:50.145413] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:56.298 [2024-07-12 10:30:50.145628] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:56.298 [2024-07-12 10:30:50.145682] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:56.298 pt2 00:16:56.298 10:30:50 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:16:56.298 10:30:50 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:56.298 10:30:50 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:56.555 [2024-07-12 10:30:50.320524] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:56.555 [2024-07-12 10:30:50.320721] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:56.555 [2024-07-12 10:30:50.320796] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:16:56.556 [2024-07-12 10:30:50.320911] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:56.556 [2024-07-12 10:30:50.321331] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:56.556 [2024-07-12 10:30:50.321392] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:56.556 [2024-07-12 10:30:50.321593] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:16:56.556 [2024-07-12 10:30:50.321647] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:56.556 [2024-07-12 10:30:50.321796] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:16:56.556 [2024-07-12 10:30:50.322044] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:56.556 [2024-07-12 10:30:50.322183] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005c70 00:16:56.556 [2024-07-12 10:30:50.322618] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:16:56.556 [2024-07-12 10:30:50.322748] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:16:56.556 [2024-07-12 10:30:50.322953] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:56.556 pt3 00:16:56.556 10:30:50 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:16:56.556 10:30:50 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:56.556 10:30:50 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:16:56.556 10:30:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:56.556 10:30:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:56.556 10:30:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:56.556 10:30:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:56.556 10:30:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:56.556 10:30:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:56.556 10:30:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:56.556 10:30:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:56.556 10:30:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:56.556 10:30:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:56.556 10:30:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:56.814 10:30:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:56.814 "name": "raid_bdev1", 00:16:56.814 "uuid": "14609caa-d233-42ea-b094-d5c70ef788d4", 00:16:56.814 "strip_size_kb": 64, 00:16:56.814 "state": "online", 00:16:56.814 "raid_level": "raid0", 00:16:56.814 "superblock": true, 00:16:56.814 "num_base_bdevs": 3, 00:16:56.814 "num_base_bdevs_discovered": 3, 00:16:56.814 "num_base_bdevs_operational": 3, 00:16:56.814 "base_bdevs_list": [ 00:16:56.814 { 00:16:56.814 "name": "pt1", 00:16:56.814 "uuid": "25f5443c-59c9-5748-84c2-dde2da59afb4", 00:16:56.814 "is_configured": true, 00:16:56.814 "data_offset": 2048, 00:16:56.814 "data_size": 63488 00:16:56.814 }, 00:16:56.814 { 00:16:56.814 "name": "pt2", 00:16:56.814 "uuid": "236878d0-011e-5098-a82f-e84594eed52e", 00:16:56.814 "is_configured": true, 00:16:56.814 "data_offset": 2048, 00:16:56.814 "data_size": 63488 00:16:56.814 }, 00:16:56.814 { 00:16:56.814 "name": "pt3", 00:16:56.814 "uuid": "4637f688-24a2-536c-b91c-4676e2e16de4", 00:16:56.814 "is_configured": true, 00:16:56.814 "data_offset": 2048, 00:16:56.814 "data_size": 63488 00:16:56.814 } 00:16:56.814 ] 00:16:56.814 }' 00:16:56.814 10:30:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:56.814 10:30:50 -- common/autotest_common.sh@10 -- # set +x 00:16:57.380 10:30:51 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:57.380 10:30:51 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:16:57.638 [2024-07-12 10:30:51.328899] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:57.638 10:30:51 -- bdev/bdev_raid.sh@430 -- # '[' 14609caa-d233-42ea-b094-d5c70ef788d4 '!=' 14609caa-d233-42ea-b094-d5c70ef788d4 ']' 00:16:57.638 10:30:51 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:16:57.638 10:30:51 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:57.638 
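For reference, the RPC sequence the trace above just exercised can be replayed by hand against the same bdev_svc socket; a minimal sketch, with bdev names and the socket path taken from the trace (not an authoritative recipe):

  # wrap each malloc bdev in a passthru bdev, as the test does for pt1..pt3
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
  # assemble a raid0 bdev with a 64 KiB strip size from the three passthru bdevs
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1
  # confirm the array reached the online state, as verified above
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'
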
10:30:51 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:57.638 10:30:51 -- bdev/bdev_raid.sh@511 -- # killprocess 118688 00:16:57.638 10:30:51 -- common/autotest_common.sh@926 -- # '[' -z 118688 ']' 00:16:57.638 10:30:51 -- common/autotest_common.sh@930 -- # kill -0 118688 00:16:57.638 10:30:51 -- common/autotest_common.sh@931 -- # uname 00:16:57.638 10:30:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:57.638 10:30:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 118688 00:16:57.639 killing process with pid 118688 00:16:57.639 10:30:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:57.639 10:30:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:57.639 10:30:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 118688' 00:16:57.639 10:30:51 -- common/autotest_common.sh@945 -- # kill 118688 00:16:57.639 10:30:51 -- common/autotest_common.sh@950 -- # wait 118688 00:16:57.639 [2024-07-12 10:30:51.354264] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:57.639 [2024-07-12 10:30:51.354335] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:57.639 [2024-07-12 10:30:51.354415] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:57.639 [2024-07-12 10:30:51.354434] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:16:57.639 [2024-07-12 10:30:51.543380] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:58.573 ************************************ 00:16:58.573 END TEST raid_superblock_test 00:16:58.573 ************************************ 00:16:58.573 10:30:52 -- bdev/bdev_raid.sh@513 -- # return 0 00:16:58.573 00:16:58.573 real 0m10.138s 00:16:58.573 user 0m17.846s 00:16:58.573 sys 0m1.122s 00:16:58.573 10:30:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:58.573 10:30:52 -- common/autotest_common.sh@10 -- # set +x 00:16:58.832 10:30:52 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:16:58.832 10:30:52 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:16:58.832 10:30:52 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:16:58.832 10:30:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:58.832 10:30:52 -- common/autotest_common.sh@10 -- # set +x 00:16:58.832 ************************************ 00:16:58.832 START TEST raid_state_function_test 00:16:58.832 ************************************ 00:16:58.832 10:30:52 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 3 false 00:16:58.832 10:30:52 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:16:58.832 10:30:52 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:16:58.832 10:30:52 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:16:58.832 10:30:52 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:58.832 10:30:52 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:16:58.832 10:30:52 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:58.832 10:30:52 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:58.832 10:30:52 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:16:58.832 10:30:52 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:58.832 10:30:52 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:58.832 10:30:52 -- bdev/bdev_raid.sh@206 -- # echo 
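The killprocess teardown traced above boils down to a pid liveness check followed by kill and reap; a hedged sketch of the same pattern (here $raid_pid stands in for the 118688 seen in the trace):

  # confirm the pid is still alive (kill -0), then terminate and reap it
  kill -0 "$raid_pid" && kill "$raid_pid"
  wait "$raid_pid"
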
BaseBdev2 00:16:58.832 10:30:52 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:58.832 10:30:52 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:58.832 10:30:52 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:16:58.832 10:30:52 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:58.832 10:30:52 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:58.832 10:30:52 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:58.832 10:30:52 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:58.832 10:30:52 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:58.832 10:30:52 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:58.832 10:30:52 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:58.832 10:30:52 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:16:58.832 10:30:52 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:16:58.832 10:30:52 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:16:58.832 10:30:52 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:16:58.832 10:30:52 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:16:58.832 10:30:52 -- bdev/bdev_raid.sh@226 -- # raid_pid=119011 00:16:58.832 10:30:52 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 119011' 00:16:58.832 Process raid pid: 119011 00:16:58.832 10:30:52 -- bdev/bdev_raid.sh@228 -- # waitforlisten 119011 /var/tmp/spdk-raid.sock 00:16:58.832 10:30:52 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:58.832 10:30:52 -- common/autotest_common.sh@819 -- # '[' -z 119011 ']' 00:16:58.832 10:30:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:58.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:58.832 10:30:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:58.832 10:30:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:58.832 10:30:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:58.832 10:30:52 -- common/autotest_common.sh@10 -- # set +x 00:16:58.832 [2024-07-12 10:30:52.558536] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
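The app under test is the minimal bdev_svc binary started on a private RPC socket with bdev_raid debug logging; roughly, the startup the trace reflects looks like the sketch below (backgrounding and pid capture are inferred rather than shown verbatim, and waitforlisten is a helper from autotest_common.sh):

  # start bdev_svc with bdev_raid debug logs on a dedicated RPC socket
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
  raid_pid=$!
  # block until the app is serving RPCs on that socket
  waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock
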
00:16:58.832 [2024-07-12 10:30:52.558732] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:58.832 [2024-07-12 10:30:52.706036] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:59.090 [2024-07-12 10:30:52.870014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:59.347 [2024-07-12 10:30:53.041000] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:59.914 10:30:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:59.914 10:30:53 -- common/autotest_common.sh@852 -- # return 0 00:16:59.914 10:30:53 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:59.914 [2024-07-12 10:30:53.763447] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:59.914 [2024-07-12 10:30:53.763521] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:59.914 [2024-07-12 10:30:53.763534] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:59.914 [2024-07-12 10:30:53.763553] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:59.914 [2024-07-12 10:30:53.763560] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:59.914 [2024-07-12 10:30:53.763600] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:59.914 10:30:53 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:59.914 10:30:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:59.914 10:30:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:59.914 10:30:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:59.914 10:30:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:59.914 10:30:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:59.914 10:30:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:59.914 10:30:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:59.914 10:30:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:59.914 10:30:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:59.914 10:30:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:59.914 10:30:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:00.172 10:30:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:00.172 "name": "Existed_Raid", 00:17:00.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.172 "strip_size_kb": 64, 00:17:00.172 "state": "configuring", 00:17:00.172 "raid_level": "concat", 00:17:00.172 "superblock": false, 00:17:00.172 "num_base_bdevs": 3, 00:17:00.172 "num_base_bdevs_discovered": 0, 00:17:00.172 "num_base_bdevs_operational": 3, 00:17:00.172 "base_bdevs_list": [ 00:17:00.172 { 00:17:00.172 "name": "BaseBdev1", 00:17:00.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.172 "is_configured": false, 00:17:00.172 "data_offset": 0, 00:17:00.172 "data_size": 0 00:17:00.172 }, 00:17:00.172 { 00:17:00.172 "name": "BaseBdev2", 00:17:00.172 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:00.172 "is_configured": false, 00:17:00.172 "data_offset": 0, 00:17:00.172 "data_size": 0 00:17:00.172 }, 00:17:00.172 { 00:17:00.172 "name": "BaseBdev3", 00:17:00.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.172 "is_configured": false, 00:17:00.172 "data_offset": 0, 00:17:00.172 "data_size": 0 00:17:00.172 } 00:17:00.172 ] 00:17:00.172 }' 00:17:00.172 10:30:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:00.172 10:30:54 -- common/autotest_common.sh@10 -- # set +x 00:17:01.105 10:30:54 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:01.364 [2024-07-12 10:30:55.043549] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:01.364 [2024-07-12 10:30:55.043613] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:17:01.364 10:30:55 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:01.622 [2024-07-12 10:30:55.303617] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:01.622 [2024-07-12 10:30:55.303687] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:01.622 [2024-07-12 10:30:55.303699] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:01.622 [2024-07-12 10:30:55.303716] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:01.622 [2024-07-12 10:30:55.303723] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:01.622 [2024-07-12 10:30:55.303765] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:01.622 10:30:55 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:01.881 [2024-07-12 10:30:55.561345] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:01.881 BaseBdev1 00:17:01.881 10:30:55 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:01.881 10:30:55 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:17:01.881 10:30:55 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:01.881 10:30:55 -- common/autotest_common.sh@889 -- # local i 00:17:01.881 10:30:55 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:01.882 10:30:55 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:01.882 10:30:55 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:01.882 10:30:55 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:02.141 [ 00:17:02.141 { 00:17:02.141 "name": "BaseBdev1", 00:17:02.141 "aliases": [ 00:17:02.141 "49169942-1d45-46fa-8812-5a5ff8c0d404" 00:17:02.141 ], 00:17:02.141 "product_name": "Malloc disk", 00:17:02.141 "block_size": 512, 00:17:02.141 "num_blocks": 65536, 00:17:02.141 "uuid": "49169942-1d45-46fa-8812-5a5ff8c0d404", 00:17:02.141 "assigned_rate_limits": { 00:17:02.141 "rw_ios_per_sec": 0, 00:17:02.141 "rw_mbytes_per_sec": 0, 00:17:02.141 "r_mbytes_per_sec": 0, 00:17:02.141 "w_mbytes_per_sec": 
0 00:17:02.141 }, 00:17:02.141 "claimed": true, 00:17:02.141 "claim_type": "exclusive_write", 00:17:02.141 "zoned": false, 00:17:02.141 "supported_io_types": { 00:17:02.141 "read": true, 00:17:02.141 "write": true, 00:17:02.141 "unmap": true, 00:17:02.141 "write_zeroes": true, 00:17:02.141 "flush": true, 00:17:02.141 "reset": true, 00:17:02.141 "compare": false, 00:17:02.141 "compare_and_write": false, 00:17:02.141 "abort": true, 00:17:02.141 "nvme_admin": false, 00:17:02.141 "nvme_io": false 00:17:02.141 }, 00:17:02.141 "memory_domains": [ 00:17:02.141 { 00:17:02.141 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:02.141 "dma_device_type": 2 00:17:02.141 } 00:17:02.141 ], 00:17:02.141 "driver_specific": {} 00:17:02.141 } 00:17:02.141 ] 00:17:02.141 10:30:55 -- common/autotest_common.sh@895 -- # return 0 00:17:02.141 10:30:55 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:02.141 10:30:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:02.141 10:30:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:02.141 10:30:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:02.141 10:30:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:02.141 10:30:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:02.141 10:30:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:02.141 10:30:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:02.141 10:30:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:02.141 10:30:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:02.141 10:30:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:02.141 10:30:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:02.400 10:30:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:02.400 "name": "Existed_Raid", 00:17:02.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.400 "strip_size_kb": 64, 00:17:02.400 "state": "configuring", 00:17:02.400 "raid_level": "concat", 00:17:02.400 "superblock": false, 00:17:02.400 "num_base_bdevs": 3, 00:17:02.400 "num_base_bdevs_discovered": 1, 00:17:02.400 "num_base_bdevs_operational": 3, 00:17:02.400 "base_bdevs_list": [ 00:17:02.400 { 00:17:02.400 "name": "BaseBdev1", 00:17:02.400 "uuid": "49169942-1d45-46fa-8812-5a5ff8c0d404", 00:17:02.400 "is_configured": true, 00:17:02.400 "data_offset": 0, 00:17:02.400 "data_size": 65536 00:17:02.400 }, 00:17:02.400 { 00:17:02.400 "name": "BaseBdev2", 00:17:02.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.400 "is_configured": false, 00:17:02.400 "data_offset": 0, 00:17:02.400 "data_size": 0 00:17:02.400 }, 00:17:02.400 { 00:17:02.400 "name": "BaseBdev3", 00:17:02.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.400 "is_configured": false, 00:17:02.400 "data_offset": 0, 00:17:02.400 "data_size": 0 00:17:02.400 } 00:17:02.400 ] 00:17:02.400 }' 00:17:02.400 10:30:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:02.400 10:30:56 -- common/autotest_common.sh@10 -- # set +x 00:17:02.967 10:30:56 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:03.225 [2024-07-12 10:30:56.893579] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:03.226 [2024-07-12 10:30:56.893617] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x616000006980 name Existed_Raid, state configuring 00:17:03.226 10:30:56 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:17:03.226 10:30:56 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:03.484 [2024-07-12 10:30:57.157654] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:03.484 [2024-07-12 10:30:57.159419] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:03.484 [2024-07-12 10:30:57.159485] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:03.484 [2024-07-12 10:30:57.159496] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:03.484 [2024-07-12 10:30:57.159520] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:03.484 10:30:57 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:03.484 10:30:57 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:03.484 10:30:57 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:03.484 10:30:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:03.484 10:30:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:03.484 10:30:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:03.484 10:30:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:03.484 10:30:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:03.484 10:30:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:03.484 10:30:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:03.484 10:30:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:03.484 10:30:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:03.484 10:30:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:03.484 10:30:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:03.743 10:30:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:03.743 "name": "Existed_Raid", 00:17:03.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.743 "strip_size_kb": 64, 00:17:03.743 "state": "configuring", 00:17:03.743 "raid_level": "concat", 00:17:03.743 "superblock": false, 00:17:03.743 "num_base_bdevs": 3, 00:17:03.743 "num_base_bdevs_discovered": 1, 00:17:03.743 "num_base_bdevs_operational": 3, 00:17:03.743 "base_bdevs_list": [ 00:17:03.743 { 00:17:03.743 "name": "BaseBdev1", 00:17:03.743 "uuid": "49169942-1d45-46fa-8812-5a5ff8c0d404", 00:17:03.743 "is_configured": true, 00:17:03.743 "data_offset": 0, 00:17:03.743 "data_size": 65536 00:17:03.743 }, 00:17:03.743 { 00:17:03.743 "name": "BaseBdev2", 00:17:03.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.743 "is_configured": false, 00:17:03.743 "data_offset": 0, 00:17:03.743 "data_size": 0 00:17:03.743 }, 00:17:03.743 { 00:17:03.743 "name": "BaseBdev3", 00:17:03.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.743 "is_configured": false, 00:17:03.743 "data_offset": 0, 00:17:03.743 "data_size": 0 00:17:03.743 } 00:17:03.743 ] 00:17:03.743 }' 00:17:03.743 10:30:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:03.743 10:30:57 -- common/autotest_common.sh@10 -- # set +x 00:17:04.310 10:30:58 -- bdev/bdev_raid.sh@256 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:04.569 [2024-07-12 10:30:58.356930] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:04.569 BaseBdev2 00:17:04.569 10:30:58 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:04.569 10:30:58 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:17:04.569 10:30:58 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:04.569 10:30:58 -- common/autotest_common.sh@889 -- # local i 00:17:04.569 10:30:58 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:04.569 10:30:58 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:04.569 10:30:58 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:04.828 10:30:58 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:05.087 [ 00:17:05.087 { 00:17:05.087 "name": "BaseBdev2", 00:17:05.087 "aliases": [ 00:17:05.087 "10c7c7bb-ae5c-49d4-aeb7-2bd8c8825d22" 00:17:05.087 ], 00:17:05.087 "product_name": "Malloc disk", 00:17:05.087 "block_size": 512, 00:17:05.087 "num_blocks": 65536, 00:17:05.087 "uuid": "10c7c7bb-ae5c-49d4-aeb7-2bd8c8825d22", 00:17:05.087 "assigned_rate_limits": { 00:17:05.087 "rw_ios_per_sec": 0, 00:17:05.087 "rw_mbytes_per_sec": 0, 00:17:05.087 "r_mbytes_per_sec": 0, 00:17:05.087 "w_mbytes_per_sec": 0 00:17:05.087 }, 00:17:05.087 "claimed": true, 00:17:05.087 "claim_type": "exclusive_write", 00:17:05.087 "zoned": false, 00:17:05.087 "supported_io_types": { 00:17:05.087 "read": true, 00:17:05.087 "write": true, 00:17:05.087 "unmap": true, 00:17:05.087 "write_zeroes": true, 00:17:05.087 "flush": true, 00:17:05.087 "reset": true, 00:17:05.087 "compare": false, 00:17:05.087 "compare_and_write": false, 00:17:05.087 "abort": true, 00:17:05.087 "nvme_admin": false, 00:17:05.087 "nvme_io": false 00:17:05.087 }, 00:17:05.087 "memory_domains": [ 00:17:05.087 { 00:17:05.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:05.087 "dma_device_type": 2 00:17:05.087 } 00:17:05.087 ], 00:17:05.087 "driver_specific": {} 00:17:05.087 } 00:17:05.087 ] 00:17:05.087 10:30:58 -- common/autotest_common.sh@895 -- # return 0 00:17:05.087 10:30:58 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:05.087 10:30:58 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:05.087 10:30:58 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:05.087 10:30:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:05.087 10:30:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:05.087 10:30:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:05.087 10:30:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:05.087 10:30:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:05.087 10:30:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:05.087 10:30:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:05.087 10:30:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:05.087 10:30:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:05.087 10:30:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:05.087 10:30:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
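Each BaseBdev here is a 32 MiB malloc bdev with 512-byte blocks (hence the 65536 num_blocks in the descriptors), and waitforbdev amounts to letting examine finish and polling the descriptor; sketched directly from the calls in the trace:

  # create the base bdev, let examine complete, then fetch its descriptor (2000 ms timeout)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
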
00:17:05.346 10:30:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:05.346 "name": "Existed_Raid", 00:17:05.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.346 "strip_size_kb": 64, 00:17:05.346 "state": "configuring", 00:17:05.346 "raid_level": "concat", 00:17:05.346 "superblock": false, 00:17:05.346 "num_base_bdevs": 3, 00:17:05.346 "num_base_bdevs_discovered": 2, 00:17:05.346 "num_base_bdevs_operational": 3, 00:17:05.346 "base_bdevs_list": [ 00:17:05.346 { 00:17:05.346 "name": "BaseBdev1", 00:17:05.346 "uuid": "49169942-1d45-46fa-8812-5a5ff8c0d404", 00:17:05.346 "is_configured": true, 00:17:05.346 "data_offset": 0, 00:17:05.346 "data_size": 65536 00:17:05.346 }, 00:17:05.346 { 00:17:05.346 "name": "BaseBdev2", 00:17:05.346 "uuid": "10c7c7bb-ae5c-49d4-aeb7-2bd8c8825d22", 00:17:05.346 "is_configured": true, 00:17:05.346 "data_offset": 0, 00:17:05.346 "data_size": 65536 00:17:05.346 }, 00:17:05.346 { 00:17:05.346 "name": "BaseBdev3", 00:17:05.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.346 "is_configured": false, 00:17:05.346 "data_offset": 0, 00:17:05.346 "data_size": 0 00:17:05.346 } 00:17:05.346 ] 00:17:05.346 }' 00:17:05.346 10:30:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:05.346 10:30:59 -- common/autotest_common.sh@10 -- # set +x 00:17:05.913 10:30:59 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:06.173 [2024-07-12 10:30:59.999433] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:06.173 [2024-07-12 10:30:59.999477] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:17:06.173 [2024-07-12 10:30:59.999487] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:17:06.173 [2024-07-12 10:30:59.999611] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:17:06.173 [2024-07-12 10:31:00.000022] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:17:06.173 [2024-07-12 10:31:00.000046] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:17:06.173 [2024-07-12 10:31:00.000286] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:06.173 BaseBdev3 00:17:06.173 10:31:00 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:17:06.173 10:31:00 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:17:06.173 10:31:00 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:06.173 10:31:00 -- common/autotest_common.sh@889 -- # local i 00:17:06.173 10:31:00 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:06.173 10:31:00 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:06.173 10:31:00 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:06.432 10:31:00 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:06.691 [ 00:17:06.691 { 00:17:06.691 "name": "BaseBdev3", 00:17:06.691 "aliases": [ 00:17:06.691 "d5454bbe-4bac-451b-8aab-bfccd7a3a0f4" 00:17:06.691 ], 00:17:06.691 "product_name": "Malloc disk", 00:17:06.691 "block_size": 512, 00:17:06.691 "num_blocks": 65536, 00:17:06.691 "uuid": "d5454bbe-4bac-451b-8aab-bfccd7a3a0f4", 00:17:06.691 "assigned_rate_limits": { 00:17:06.691 
"rw_ios_per_sec": 0, 00:17:06.691 "rw_mbytes_per_sec": 0, 00:17:06.691 "r_mbytes_per_sec": 0, 00:17:06.691 "w_mbytes_per_sec": 0 00:17:06.691 }, 00:17:06.691 "claimed": true, 00:17:06.691 "claim_type": "exclusive_write", 00:17:06.691 "zoned": false, 00:17:06.691 "supported_io_types": { 00:17:06.691 "read": true, 00:17:06.691 "write": true, 00:17:06.691 "unmap": true, 00:17:06.691 "write_zeroes": true, 00:17:06.691 "flush": true, 00:17:06.691 "reset": true, 00:17:06.691 "compare": false, 00:17:06.691 "compare_and_write": false, 00:17:06.691 "abort": true, 00:17:06.691 "nvme_admin": false, 00:17:06.691 "nvme_io": false 00:17:06.691 }, 00:17:06.691 "memory_domains": [ 00:17:06.691 { 00:17:06.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:06.691 "dma_device_type": 2 00:17:06.691 } 00:17:06.691 ], 00:17:06.691 "driver_specific": {} 00:17:06.691 } 00:17:06.691 ] 00:17:06.691 10:31:00 -- common/autotest_common.sh@895 -- # return 0 00:17:06.691 10:31:00 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:06.691 10:31:00 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:06.691 10:31:00 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:17:06.691 10:31:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:06.691 10:31:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:06.691 10:31:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:06.691 10:31:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:06.691 10:31:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:06.691 10:31:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:06.691 10:31:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:06.691 10:31:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:06.691 10:31:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:06.691 10:31:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:06.691 10:31:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:06.949 10:31:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:06.949 "name": "Existed_Raid", 00:17:06.949 "uuid": "0a308afc-2b7d-4d98-8590-e71942435708", 00:17:06.949 "strip_size_kb": 64, 00:17:06.949 "state": "online", 00:17:06.949 "raid_level": "concat", 00:17:06.949 "superblock": false, 00:17:06.949 "num_base_bdevs": 3, 00:17:06.949 "num_base_bdevs_discovered": 3, 00:17:06.949 "num_base_bdevs_operational": 3, 00:17:06.949 "base_bdevs_list": [ 00:17:06.949 { 00:17:06.949 "name": "BaseBdev1", 00:17:06.949 "uuid": "49169942-1d45-46fa-8812-5a5ff8c0d404", 00:17:06.949 "is_configured": true, 00:17:06.949 "data_offset": 0, 00:17:06.949 "data_size": 65536 00:17:06.949 }, 00:17:06.949 { 00:17:06.949 "name": "BaseBdev2", 00:17:06.949 "uuid": "10c7c7bb-ae5c-49d4-aeb7-2bd8c8825d22", 00:17:06.949 "is_configured": true, 00:17:06.949 "data_offset": 0, 00:17:06.949 "data_size": 65536 00:17:06.949 }, 00:17:06.949 { 00:17:06.949 "name": "BaseBdev3", 00:17:06.949 "uuid": "d5454bbe-4bac-451b-8aab-bfccd7a3a0f4", 00:17:06.950 "is_configured": true, 00:17:06.950 "data_offset": 0, 00:17:06.950 "data_size": 65536 00:17:06.950 } 00:17:06.950 ] 00:17:06.950 }' 00:17:06.950 10:31:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:06.950 10:31:00 -- common/autotest_common.sh@10 -- # set +x 00:17:07.516 10:31:01 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:17:07.774 [2024-07-12 10:31:01.587813] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:07.774 [2024-07-12 10:31:01.587843] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:07.774 [2024-07-12 10:31:01.587897] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:07.774 10:31:01 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:07.774 10:31:01 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:17:07.774 10:31:01 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:07.774 10:31:01 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:07.774 10:31:01 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:17:07.775 10:31:01 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:17:07.775 10:31:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:07.775 10:31:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:17:07.775 10:31:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:07.775 10:31:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:07.775 10:31:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:07.775 10:31:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:07.775 10:31:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:07.775 10:31:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:07.775 10:31:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:07.775 10:31:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:07.775 10:31:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:08.033 10:31:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:08.033 "name": "Existed_Raid", 00:17:08.033 "uuid": "0a308afc-2b7d-4d98-8590-e71942435708", 00:17:08.033 "strip_size_kb": 64, 00:17:08.033 "state": "offline", 00:17:08.033 "raid_level": "concat", 00:17:08.033 "superblock": false, 00:17:08.033 "num_base_bdevs": 3, 00:17:08.033 "num_base_bdevs_discovered": 2, 00:17:08.033 "num_base_bdevs_operational": 2, 00:17:08.033 "base_bdevs_list": [ 00:17:08.033 { 00:17:08.033 "name": null, 00:17:08.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.033 "is_configured": false, 00:17:08.033 "data_offset": 0, 00:17:08.033 "data_size": 65536 00:17:08.033 }, 00:17:08.033 { 00:17:08.033 "name": "BaseBdev2", 00:17:08.033 "uuid": "10c7c7bb-ae5c-49d4-aeb7-2bd8c8825d22", 00:17:08.033 "is_configured": true, 00:17:08.033 "data_offset": 0, 00:17:08.033 "data_size": 65536 00:17:08.033 }, 00:17:08.033 { 00:17:08.033 "name": "BaseBdev3", 00:17:08.033 "uuid": "d5454bbe-4bac-451b-8aab-bfccd7a3a0f4", 00:17:08.033 "is_configured": true, 00:17:08.033 "data_offset": 0, 00:17:08.033 "data_size": 65536 00:17:08.033 } 00:17:08.033 ] 00:17:08.033 }' 00:17:08.033 10:31:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:08.033 10:31:01 -- common/autotest_common.sh@10 -- # set +x 00:17:08.968 10:31:02 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:08.968 10:31:02 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:08.968 10:31:02 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:08.968 10:31:02 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:08.968 10:31:02 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:08.968 10:31:02 -- 
bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:08.968 10:31:02 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:09.227 [2024-07-12 10:31:02.955904] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:09.227 10:31:03 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:09.227 10:31:03 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:09.227 10:31:03 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:09.227 10:31:03 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:09.484 10:31:03 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:09.484 10:31:03 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:09.484 10:31:03 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:09.741 [2024-07-12 10:31:03.443568] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:09.741 [2024-07-12 10:31:03.443620] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:17:09.741 10:31:03 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:09.741 10:31:03 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:09.741 10:31:03 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:09.741 10:31:03 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:10.000 10:31:03 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:10.000 10:31:03 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:10.000 10:31:03 -- bdev/bdev_raid.sh@287 -- # killprocess 119011 00:17:10.000 10:31:03 -- common/autotest_common.sh@926 -- # '[' -z 119011 ']' 00:17:10.000 10:31:03 -- common/autotest_common.sh@930 -- # kill -0 119011 00:17:10.000 10:31:03 -- common/autotest_common.sh@931 -- # uname 00:17:10.000 10:31:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:10.000 10:31:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 119011 00:17:10.000 killing process with pid 119011 00:17:10.000 10:31:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:10.000 10:31:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:10.000 10:31:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 119011' 00:17:10.000 10:31:03 -- common/autotest_common.sh@945 -- # kill 119011 00:17:10.000 10:31:03 -- common/autotest_common.sh@950 -- # wait 119011 00:17:10.000 [2024-07-12 10:31:03.781381] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:10.000 [2024-07-12 10:31:03.781556] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:10.934 ************************************ 00:17:10.934 END TEST raid_state_function_test 00:17:10.934 ************************************ 00:17:10.934 10:31:04 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:10.934 00:17:10.934 real 0m12.276s 00:17:10.934 user 0m21.922s 00:17:10.934 sys 0m1.354s 00:17:10.934 10:31:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:10.934 10:31:04 -- common/autotest_common.sh@10 -- # set +x 00:17:10.935 10:31:04 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:17:10.935 10:31:04 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 
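Because concat carries no redundancy (has_redundancy returns 1 for it, as for raid0), removing any base bdev is expected to take the array from online to offline, which is what the trace above verified; the check reduces to:

  # drop one base bdev, then read the raid state back (expected: offline for concat/raid0)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'
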
00:17:10.935 10:31:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:10.935 10:31:04 -- common/autotest_common.sh@10 -- # set +x 00:17:10.935 ************************************ 00:17:10.935 START TEST raid_state_function_test_sb 00:17:10.935 ************************************ 00:17:10.935 10:31:04 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 3 true 00:17:10.935 10:31:04 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:17:10.935 10:31:04 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:17:10.935 10:31:04 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:17:10.935 10:31:04 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:10.935 10:31:04 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:17:10.935 10:31:04 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:10.935 10:31:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:10.935 10:31:04 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:17:10.935 10:31:04 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:10.935 10:31:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:10.935 10:31:04 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:17:10.935 10:31:04 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:10.935 10:31:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:10.935 10:31:04 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:17:10.935 10:31:04 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:10.935 10:31:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:10.935 10:31:04 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:10.935 10:31:04 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:10.935 10:31:04 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:10.935 10:31:04 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:10.935 10:31:04 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:10.935 10:31:04 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:17:10.935 10:31:04 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:17:10.935 10:31:04 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:17:10.935 10:31:04 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:17:10.935 10:31:04 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:17:10.935 10:31:04 -- bdev/bdev_raid.sh@226 -- # raid_pid=119425 00:17:10.935 10:31:04 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 119425' 00:17:10.935 10:31:04 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:10.935 Process raid pid: 119425 00:17:10.935 10:31:04 -- bdev/bdev_raid.sh@228 -- # waitforlisten 119425 /var/tmp/spdk-raid.sock 00:17:10.935 10:31:04 -- common/autotest_common.sh@819 -- # '[' -z 119425 ']' 00:17:10.935 10:31:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:10.935 10:31:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:10.935 10:31:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:10.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
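The _sb variant that starts here differs from the previous run only in passing -s to bdev_raid_create, so the raid metadata is persisted on the base bdevs ("superblock": true in the state dumps that follow); the create call it exercises:

  # same concat geometry as before, but with an on-disk superblock (-s)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
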
00:17:10.935 10:31:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:10.935 10:31:04 -- common/autotest_common.sh@10 -- # set +x 00:17:11.192 [2024-07-12 10:31:04.910341] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:11.192 [2024-07-12 10:31:04.910552] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:11.192 [2024-07-12 10:31:05.081861] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.449 [2024-07-12 10:31:05.290351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:11.707 [2024-07-12 10:31:05.460944] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:11.964 10:31:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:11.964 10:31:05 -- common/autotest_common.sh@852 -- # return 0 00:17:11.964 10:31:05 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:12.223 [2024-07-12 10:31:05.974259] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:12.223 [2024-07-12 10:31:05.974336] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:12.223 [2024-07-12 10:31:05.974349] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:12.223 [2024-07-12 10:31:05.974367] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:12.223 [2024-07-12 10:31:05.974374] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:12.223 [2024-07-12 10:31:05.974407] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:12.223 10:31:05 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:12.223 10:31:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:12.223 10:31:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:12.223 10:31:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:12.223 10:31:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:12.223 10:31:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:12.223 10:31:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:12.223 10:31:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:12.223 10:31:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:12.223 10:31:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:12.223 10:31:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:12.223 10:31:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:12.481 10:31:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:12.481 "name": "Existed_Raid", 00:17:12.481 "uuid": "313050de-5688-420e-9004-0754b1fb0912", 00:17:12.481 "strip_size_kb": 64, 00:17:12.481 "state": "configuring", 00:17:12.481 "raid_level": "concat", 00:17:12.481 "superblock": true, 00:17:12.481 "num_base_bdevs": 3, 00:17:12.481 "num_base_bdevs_discovered": 0, 00:17:12.481 "num_base_bdevs_operational": 3, 00:17:12.481 "base_bdevs_list": [ 00:17:12.481 { 00:17:12.481 "name": 
"BaseBdev1", 00:17:12.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.481 "is_configured": false, 00:17:12.481 "data_offset": 0, 00:17:12.481 "data_size": 0 00:17:12.481 }, 00:17:12.481 { 00:17:12.481 "name": "BaseBdev2", 00:17:12.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.481 "is_configured": false, 00:17:12.481 "data_offset": 0, 00:17:12.481 "data_size": 0 00:17:12.481 }, 00:17:12.481 { 00:17:12.481 "name": "BaseBdev3", 00:17:12.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.481 "is_configured": false, 00:17:12.481 "data_offset": 0, 00:17:12.481 "data_size": 0 00:17:12.481 } 00:17:12.481 ] 00:17:12.481 }' 00:17:12.481 10:31:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:12.481 10:31:06 -- common/autotest_common.sh@10 -- # set +x 00:17:13.048 10:31:06 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:13.307 [2024-07-12 10:31:07.094478] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:13.307 [2024-07-12 10:31:07.094520] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:17:13.307 10:31:07 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:13.565 [2024-07-12 10:31:07.326536] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:13.566 [2024-07-12 10:31:07.326595] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:13.566 [2024-07-12 10:31:07.326608] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:13.566 [2024-07-12 10:31:07.326626] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:13.566 [2024-07-12 10:31:07.326633] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:13.566 [2024-07-12 10:31:07.326665] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:13.566 10:31:07 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:13.824 [2024-07-12 10:31:07.547698] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:13.824 BaseBdev1 00:17:13.824 10:31:07 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:13.824 10:31:07 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:17:13.824 10:31:07 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:13.824 10:31:07 -- common/autotest_common.sh@889 -- # local i 00:17:13.824 10:31:07 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:13.824 10:31:07 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:13.824 10:31:07 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:14.083 10:31:07 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:14.083 [ 00:17:14.083 { 00:17:14.083 "name": "BaseBdev1", 00:17:14.083 "aliases": [ 00:17:14.083 "c1f2e099-b361-4cf2-aad9-7e9536a3b8a8" 00:17:14.083 ], 00:17:14.083 "product_name": "Malloc disk", 00:17:14.083 "block_size": 512, 00:17:14.083 
"num_blocks": 65536, 00:17:14.083 "uuid": "c1f2e099-b361-4cf2-aad9-7e9536a3b8a8", 00:17:14.083 "assigned_rate_limits": { 00:17:14.083 "rw_ios_per_sec": 0, 00:17:14.083 "rw_mbytes_per_sec": 0, 00:17:14.083 "r_mbytes_per_sec": 0, 00:17:14.083 "w_mbytes_per_sec": 0 00:17:14.083 }, 00:17:14.083 "claimed": true, 00:17:14.083 "claim_type": "exclusive_write", 00:17:14.083 "zoned": false, 00:17:14.083 "supported_io_types": { 00:17:14.083 "read": true, 00:17:14.083 "write": true, 00:17:14.083 "unmap": true, 00:17:14.083 "write_zeroes": true, 00:17:14.083 "flush": true, 00:17:14.083 "reset": true, 00:17:14.083 "compare": false, 00:17:14.083 "compare_and_write": false, 00:17:14.083 "abort": true, 00:17:14.083 "nvme_admin": false, 00:17:14.083 "nvme_io": false 00:17:14.083 }, 00:17:14.083 "memory_domains": [ 00:17:14.083 { 00:17:14.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:14.083 "dma_device_type": 2 00:17:14.083 } 00:17:14.083 ], 00:17:14.083 "driver_specific": {} 00:17:14.083 } 00:17:14.083 ] 00:17:14.083 10:31:07 -- common/autotest_common.sh@895 -- # return 0 00:17:14.083 10:31:07 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:14.083 10:31:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:14.083 10:31:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:14.083 10:31:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:14.083 10:31:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:14.083 10:31:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:14.083 10:31:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:14.083 10:31:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:14.083 10:31:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:14.083 10:31:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:14.083 10:31:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:14.083 10:31:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:14.341 10:31:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:14.341 "name": "Existed_Raid", 00:17:14.341 "uuid": "e98e5671-ebac-428d-bb55-f098c81b4af4", 00:17:14.341 "strip_size_kb": 64, 00:17:14.341 "state": "configuring", 00:17:14.341 "raid_level": "concat", 00:17:14.341 "superblock": true, 00:17:14.341 "num_base_bdevs": 3, 00:17:14.341 "num_base_bdevs_discovered": 1, 00:17:14.341 "num_base_bdevs_operational": 3, 00:17:14.341 "base_bdevs_list": [ 00:17:14.341 { 00:17:14.341 "name": "BaseBdev1", 00:17:14.341 "uuid": "c1f2e099-b361-4cf2-aad9-7e9536a3b8a8", 00:17:14.341 "is_configured": true, 00:17:14.341 "data_offset": 2048, 00:17:14.341 "data_size": 63488 00:17:14.341 }, 00:17:14.341 { 00:17:14.341 "name": "BaseBdev2", 00:17:14.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.341 "is_configured": false, 00:17:14.341 "data_offset": 0, 00:17:14.341 "data_size": 0 00:17:14.341 }, 00:17:14.341 { 00:17:14.341 "name": "BaseBdev3", 00:17:14.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.341 "is_configured": false, 00:17:14.341 "data_offset": 0, 00:17:14.341 "data_size": 0 00:17:14.341 } 00:17:14.341 ] 00:17:14.341 }' 00:17:14.341 10:31:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:14.341 10:31:08 -- common/autotest_common.sh@10 -- # set +x 00:17:14.909 10:31:08 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:15.168 [2024-07-12 10:31:09.033040] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:15.168 [2024-07-12 10:31:09.033098] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:17:15.168 10:31:09 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:17:15.168 10:31:09 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:15.734 10:31:09 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:15.734 BaseBdev1 00:17:15.734 10:31:09 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:17:15.734 10:31:09 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:17:15.734 10:31:09 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:15.734 10:31:09 -- common/autotest_common.sh@889 -- # local i 00:17:15.734 10:31:09 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:15.734 10:31:09 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:15.734 10:31:09 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:15.992 10:31:09 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:16.251 [ 00:17:16.251 { 00:17:16.251 "name": "BaseBdev1", 00:17:16.251 "aliases": [ 00:17:16.251 "e243b303-1875-4fea-8fc7-9e011e37a000" 00:17:16.251 ], 00:17:16.251 "product_name": "Malloc disk", 00:17:16.251 "block_size": 512, 00:17:16.251 "num_blocks": 65536, 00:17:16.251 "uuid": "e243b303-1875-4fea-8fc7-9e011e37a000", 00:17:16.251 "assigned_rate_limits": { 00:17:16.251 "rw_ios_per_sec": 0, 00:17:16.251 "rw_mbytes_per_sec": 0, 00:17:16.251 "r_mbytes_per_sec": 0, 00:17:16.251 "w_mbytes_per_sec": 0 00:17:16.251 }, 00:17:16.251 "claimed": false, 00:17:16.251 "zoned": false, 00:17:16.251 "supported_io_types": { 00:17:16.251 "read": true, 00:17:16.251 "write": true, 00:17:16.251 "unmap": true, 00:17:16.251 "write_zeroes": true, 00:17:16.251 "flush": true, 00:17:16.251 "reset": true, 00:17:16.251 "compare": false, 00:17:16.251 "compare_and_write": false, 00:17:16.251 "abort": true, 00:17:16.251 "nvme_admin": false, 00:17:16.251 "nvme_io": false 00:17:16.251 }, 00:17:16.251 "memory_domains": [ 00:17:16.251 { 00:17:16.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:16.251 "dma_device_type": 2 00:17:16.251 } 00:17:16.251 ], 00:17:16.251 "driver_specific": {} 00:17:16.251 } 00:17:16.251 ] 00:17:16.251 10:31:09 -- common/autotest_common.sh@895 -- # return 0 00:17:16.251 10:31:09 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:16.251 [2024-07-12 10:31:10.104466] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:16.251 [2024-07-12 10:31:10.106106] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:16.251 [2024-07-12 10:31:10.106160] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:16.251 [2024-07-12 10:31:10.106187] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:16.251 [2024-07-12 
10:31:10.106210] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:16.251 10:31:10 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:16.251 10:31:10 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:16.251 10:31:10 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:16.251 10:31:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:16.251 10:31:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:16.251 10:31:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:16.251 10:31:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:16.251 10:31:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:16.251 10:31:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:16.251 10:31:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:16.251 10:31:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:16.251 10:31:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:16.251 10:31:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:16.251 10:31:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:16.509 10:31:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:16.509 "name": "Existed_Raid", 00:17:16.509 "uuid": "f93d5ded-3894-453a-84d2-3b8a1beac3dd", 00:17:16.509 "strip_size_kb": 64, 00:17:16.509 "state": "configuring", 00:17:16.509 "raid_level": "concat", 00:17:16.509 "superblock": true, 00:17:16.509 "num_base_bdevs": 3, 00:17:16.509 "num_base_bdevs_discovered": 1, 00:17:16.509 "num_base_bdevs_operational": 3, 00:17:16.509 "base_bdevs_list": [ 00:17:16.509 { 00:17:16.509 "name": "BaseBdev1", 00:17:16.509 "uuid": "e243b303-1875-4fea-8fc7-9e011e37a000", 00:17:16.509 "is_configured": true, 00:17:16.509 "data_offset": 2048, 00:17:16.509 "data_size": 63488 00:17:16.509 }, 00:17:16.509 { 00:17:16.509 "name": "BaseBdev2", 00:17:16.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:16.509 "is_configured": false, 00:17:16.509 "data_offset": 0, 00:17:16.509 "data_size": 0 00:17:16.509 }, 00:17:16.509 { 00:17:16.509 "name": "BaseBdev3", 00:17:16.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:16.509 "is_configured": false, 00:17:16.509 "data_offset": 0, 00:17:16.509 "data_size": 0 00:17:16.509 } 00:17:16.509 ] 00:17:16.509 }' 00:17:16.509 10:31:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:16.509 10:31:10 -- common/autotest_common.sh@10 -- # set +x 00:17:17.077 10:31:10 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:17.336 [2024-07-12 10:31:11.246897] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:17.336 BaseBdev2 00:17:17.595 10:31:11 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:17.595 10:31:11 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:17:17.595 10:31:11 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:17.595 10:31:11 -- common/autotest_common.sh@889 -- # local i 00:17:17.595 10:31:11 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:17.595 10:31:11 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:17.595 10:31:11 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:17.595 10:31:11 -- 
common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:17.853 [ 00:17:17.853 { 00:17:17.853 "name": "BaseBdev2", 00:17:17.853 "aliases": [ 00:17:17.853 "3817c7be-0756-49c6-bf72-7ec4782c420b" 00:17:17.853 ], 00:17:17.853 "product_name": "Malloc disk", 00:17:17.853 "block_size": 512, 00:17:17.853 "num_blocks": 65536, 00:17:17.853 "uuid": "3817c7be-0756-49c6-bf72-7ec4782c420b", 00:17:17.853 "assigned_rate_limits": { 00:17:17.853 "rw_ios_per_sec": 0, 00:17:17.853 "rw_mbytes_per_sec": 0, 00:17:17.853 "r_mbytes_per_sec": 0, 00:17:17.853 "w_mbytes_per_sec": 0 00:17:17.853 }, 00:17:17.853 "claimed": true, 00:17:17.853 "claim_type": "exclusive_write", 00:17:17.853 "zoned": false, 00:17:17.853 "supported_io_types": { 00:17:17.853 "read": true, 00:17:17.853 "write": true, 00:17:17.853 "unmap": true, 00:17:17.853 "write_zeroes": true, 00:17:17.853 "flush": true, 00:17:17.853 "reset": true, 00:17:17.853 "compare": false, 00:17:17.853 "compare_and_write": false, 00:17:17.853 "abort": true, 00:17:17.853 "nvme_admin": false, 00:17:17.853 "nvme_io": false 00:17:17.853 }, 00:17:17.853 "memory_domains": [ 00:17:17.853 { 00:17:17.853 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:17.853 "dma_device_type": 2 00:17:17.853 } 00:17:17.853 ], 00:17:17.853 "driver_specific": {} 00:17:17.853 } 00:17:17.853 ] 00:17:17.853 10:31:11 -- common/autotest_common.sh@895 -- # return 0 00:17:17.853 10:31:11 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:17.853 10:31:11 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:17.853 10:31:11 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:17.853 10:31:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:17.853 10:31:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:17.853 10:31:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:17.853 10:31:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:17.853 10:31:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:17.853 10:31:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:17.853 10:31:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:17.853 10:31:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:17.853 10:31:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:17.853 10:31:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:17.853 10:31:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:18.112 10:31:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:18.112 "name": "Existed_Raid", 00:17:18.112 "uuid": "f93d5ded-3894-453a-84d2-3b8a1beac3dd", 00:17:18.112 "strip_size_kb": 64, 00:17:18.112 "state": "configuring", 00:17:18.112 "raid_level": "concat", 00:17:18.112 "superblock": true, 00:17:18.112 "num_base_bdevs": 3, 00:17:18.112 "num_base_bdevs_discovered": 2, 00:17:18.112 "num_base_bdevs_operational": 3, 00:17:18.112 "base_bdevs_list": [ 00:17:18.112 { 00:17:18.112 "name": "BaseBdev1", 00:17:18.112 "uuid": "e243b303-1875-4fea-8fc7-9e011e37a000", 00:17:18.112 "is_configured": true, 00:17:18.112 "data_offset": 2048, 00:17:18.112 "data_size": 63488 00:17:18.112 }, 00:17:18.112 { 00:17:18.112 "name": "BaseBdev2", 00:17:18.112 "uuid": "3817c7be-0756-49c6-bf72-7ec4782c420b", 00:17:18.112 "is_configured": true, 00:17:18.112 "data_offset": 2048, 00:17:18.112 
"data_size": 63488 00:17:18.112 }, 00:17:18.112 { 00:17:18.112 "name": "BaseBdev3", 00:17:18.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.112 "is_configured": false, 00:17:18.112 "data_offset": 0, 00:17:18.112 "data_size": 0 00:17:18.112 } 00:17:18.112 ] 00:17:18.112 }' 00:17:18.112 10:31:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:18.112 10:31:11 -- common/autotest_common.sh@10 -- # set +x 00:17:18.679 10:31:12 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:18.953 [2024-07-12 10:31:12.798588] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:18.953 [2024-07-12 10:31:12.798801] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:17:18.953 [2024-07-12 10:31:12.798815] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:18.953 [2024-07-12 10:31:12.798935] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:17:18.953 BaseBdev3 00:17:18.953 [2024-07-12 10:31:12.799283] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:17:18.953 [2024-07-12 10:31:12.799296] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:17:18.953 [2024-07-12 10:31:12.799439] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:18.953 10:31:12 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:17:18.953 10:31:12 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:17:18.953 10:31:12 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:18.953 10:31:12 -- common/autotest_common.sh@889 -- # local i 00:17:18.953 10:31:12 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:18.953 10:31:12 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:18.953 10:31:12 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:19.265 10:31:12 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:19.523 [ 00:17:19.523 { 00:17:19.523 "name": "BaseBdev3", 00:17:19.523 "aliases": [ 00:17:19.523 "3f723364-cff0-49ee-b1fc-0ffa953d95f7" 00:17:19.523 ], 00:17:19.523 "product_name": "Malloc disk", 00:17:19.523 "block_size": 512, 00:17:19.523 "num_blocks": 65536, 00:17:19.523 "uuid": "3f723364-cff0-49ee-b1fc-0ffa953d95f7", 00:17:19.523 "assigned_rate_limits": { 00:17:19.523 "rw_ios_per_sec": 0, 00:17:19.523 "rw_mbytes_per_sec": 0, 00:17:19.523 "r_mbytes_per_sec": 0, 00:17:19.523 "w_mbytes_per_sec": 0 00:17:19.523 }, 00:17:19.523 "claimed": true, 00:17:19.523 "claim_type": "exclusive_write", 00:17:19.523 "zoned": false, 00:17:19.523 "supported_io_types": { 00:17:19.523 "read": true, 00:17:19.523 "write": true, 00:17:19.523 "unmap": true, 00:17:19.523 "write_zeroes": true, 00:17:19.523 "flush": true, 00:17:19.523 "reset": true, 00:17:19.523 "compare": false, 00:17:19.523 "compare_and_write": false, 00:17:19.523 "abort": true, 00:17:19.523 "nvme_admin": false, 00:17:19.523 "nvme_io": false 00:17:19.523 }, 00:17:19.523 "memory_domains": [ 00:17:19.523 { 00:17:19.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:19.523 "dma_device_type": 2 00:17:19.523 } 00:17:19.523 ], 00:17:19.523 "driver_specific": {} 00:17:19.523 } 00:17:19.523 ] 00:17:19.523 
10:31:13 -- common/autotest_common.sh@895 -- # return 0 00:17:19.523 10:31:13 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:19.523 10:31:13 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:19.523 10:31:13 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:17:19.523 10:31:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:19.523 10:31:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:19.523 10:31:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:19.523 10:31:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:19.523 10:31:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:19.523 10:31:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:19.523 10:31:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:19.523 10:31:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:19.523 10:31:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:19.523 10:31:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:19.523 10:31:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:19.523 10:31:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:19.523 "name": "Existed_Raid", 00:17:19.523 "uuid": "f93d5ded-3894-453a-84d2-3b8a1beac3dd", 00:17:19.523 "strip_size_kb": 64, 00:17:19.523 "state": "online", 00:17:19.523 "raid_level": "concat", 00:17:19.523 "superblock": true, 00:17:19.523 "num_base_bdevs": 3, 00:17:19.523 "num_base_bdevs_discovered": 3, 00:17:19.523 "num_base_bdevs_operational": 3, 00:17:19.523 "base_bdevs_list": [ 00:17:19.523 { 00:17:19.523 "name": "BaseBdev1", 00:17:19.523 "uuid": "e243b303-1875-4fea-8fc7-9e011e37a000", 00:17:19.523 "is_configured": true, 00:17:19.523 "data_offset": 2048, 00:17:19.523 "data_size": 63488 00:17:19.523 }, 00:17:19.523 { 00:17:19.523 "name": "BaseBdev2", 00:17:19.523 "uuid": "3817c7be-0756-49c6-bf72-7ec4782c420b", 00:17:19.523 "is_configured": true, 00:17:19.523 "data_offset": 2048, 00:17:19.523 "data_size": 63488 00:17:19.523 }, 00:17:19.523 { 00:17:19.523 "name": "BaseBdev3", 00:17:19.523 "uuid": "3f723364-cff0-49ee-b1fc-0ffa953d95f7", 00:17:19.523 "is_configured": true, 00:17:19.523 "data_offset": 2048, 00:17:19.523 "data_size": 63488 00:17:19.523 } 00:17:19.523 ] 00:17:19.523 }' 00:17:19.523 10:31:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:19.523 10:31:13 -- common/autotest_common.sh@10 -- # set +x 00:17:20.457 10:31:14 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:20.457 [2024-07-12 10:31:14.246916] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:20.457 [2024-07-12 10:31:14.246942] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:20.457 [2024-07-12 10:31:14.246999] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:20.457 10:31:14 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:20.457 10:31:14 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:17:20.457 10:31:14 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:20.457 10:31:14 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:20.457 10:31:14 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:17:20.457 10:31:14 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:17:20.457 10:31:14 -- 
bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:20.457 10:31:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:17:20.457 10:31:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:20.457 10:31:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:20.457 10:31:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:20.457 10:31:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:20.457 10:31:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:20.457 10:31:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:20.457 10:31:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:20.457 10:31:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:20.457 10:31:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:20.716 10:31:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:20.716 "name": "Existed_Raid", 00:17:20.716 "uuid": "f93d5ded-3894-453a-84d2-3b8a1beac3dd", 00:17:20.716 "strip_size_kb": 64, 00:17:20.716 "state": "offline", 00:17:20.716 "raid_level": "concat", 00:17:20.716 "superblock": true, 00:17:20.716 "num_base_bdevs": 3, 00:17:20.716 "num_base_bdevs_discovered": 2, 00:17:20.716 "num_base_bdevs_operational": 2, 00:17:20.716 "base_bdevs_list": [ 00:17:20.716 { 00:17:20.716 "name": null, 00:17:20.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.716 "is_configured": false, 00:17:20.716 "data_offset": 2048, 00:17:20.716 "data_size": 63488 00:17:20.716 }, 00:17:20.716 { 00:17:20.716 "name": "BaseBdev2", 00:17:20.716 "uuid": "3817c7be-0756-49c6-bf72-7ec4782c420b", 00:17:20.716 "is_configured": true, 00:17:20.716 "data_offset": 2048, 00:17:20.716 "data_size": 63488 00:17:20.716 }, 00:17:20.716 { 00:17:20.716 "name": "BaseBdev3", 00:17:20.716 "uuid": "3f723364-cff0-49ee-b1fc-0ffa953d95f7", 00:17:20.716 "is_configured": true, 00:17:20.716 "data_offset": 2048, 00:17:20.716 "data_size": 63488 00:17:20.716 } 00:17:20.716 ] 00:17:20.716 }' 00:17:20.716 10:31:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:20.716 10:31:14 -- common/autotest_common.sh@10 -- # set +x 00:17:21.282 10:31:15 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:21.282 10:31:15 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:21.282 10:31:15 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:21.282 10:31:15 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:21.540 10:31:15 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:21.540 10:31:15 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:21.540 10:31:15 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:21.799 [2024-07-12 10:31:15.533565] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:21.799 10:31:15 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:21.799 10:31:15 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:21.799 10:31:15 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:21.799 10:31:15 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:22.057 10:31:15 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:22.057 10:31:15 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:22.057 10:31:15 -- 
bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:22.316 [2024-07-12 10:31:16.060087] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:22.316 [2024-07-12 10:31:16.060155] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:17:22.316 10:31:16 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:22.316 10:31:16 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:22.316 10:31:16 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:22.316 10:31:16 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:22.575 10:31:16 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:22.575 10:31:16 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:22.575 10:31:16 -- bdev/bdev_raid.sh@287 -- # killprocess 119425 00:17:22.575 10:31:16 -- common/autotest_common.sh@926 -- # '[' -z 119425 ']' 00:17:22.575 10:31:16 -- common/autotest_common.sh@930 -- # kill -0 119425 00:17:22.575 10:31:16 -- common/autotest_common.sh@931 -- # uname 00:17:22.575 10:31:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:22.575 10:31:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 119425 00:17:22.575 10:31:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:22.575 killing process with pid 119425 00:17:22.575 10:31:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:22.575 10:31:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 119425' 00:17:22.575 10:31:16 -- common/autotest_common.sh@945 -- # kill 119425 00:17:22.575 10:31:16 -- common/autotest_common.sh@950 -- # wait 119425 00:17:22.575 [2024-07-12 10:31:16.414014] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:22.575 [2024-07-12 10:31:16.414127] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:23.535 ************************************ 00:17:23.535 END TEST raid_state_function_test_sb 00:17:23.535 ************************************ 00:17:23.535 10:31:17 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:23.535 00:17:23.535 real 0m12.587s 00:17:23.535 user 0m22.408s 00:17:23.535 sys 0m1.391s 00:17:23.535 10:31:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:23.535 10:31:17 -- common/autotest_common.sh@10 -- # set +x 00:17:23.793 10:31:17 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:17:23.793 10:31:17 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:17:23.793 10:31:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:23.793 10:31:17 -- common/autotest_common.sh@10 -- # set +x 00:17:23.793 ************************************ 00:17:23.793 START TEST raid_superblock_test 00:17:23.793 ************************************ 00:17:23.793 10:31:17 -- common/autotest_common.sh@1104 -- # raid_superblock_test concat 3 00:17:23.793 10:31:17 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:17:23.793 10:31:17 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:17:23.793 10:31:17 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:17:23.793 10:31:17 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:17:23.793 10:31:17 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:17:23.793 10:31:17 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:17:23.793 10:31:17 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 
00:17:23.793 10:31:17 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:17:23.793 10:31:17 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:17:23.793 10:31:17 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:17:23.793 10:31:17 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:17:23.793 10:31:17 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:17:23.793 10:31:17 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:17:23.793 10:31:17 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:17:23.793 10:31:17 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:17:23.793 10:31:17 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:17:23.793 10:31:17 -- bdev/bdev_raid.sh@357 -- # raid_pid=119824 00:17:23.793 10:31:17 -- bdev/bdev_raid.sh@358 -- # waitforlisten 119824 /var/tmp/spdk-raid.sock 00:17:23.793 10:31:17 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:23.793 10:31:17 -- common/autotest_common.sh@819 -- # '[' -z 119824 ']' 00:17:23.793 10:31:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:23.793 10:31:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:23.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:23.793 10:31:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:23.793 10:31:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:23.793 10:31:17 -- common/autotest_common.sh@10 -- # set +x 00:17:23.793 [2024-07-12 10:31:17.544697] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:23.793 [2024-07-12 10:31:17.544871] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119824 ] 00:17:23.793 [2024-07-12 10:31:17.699187] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:24.052 [2024-07-12 10:31:17.879727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:24.311 [2024-07-12 10:31:18.064711] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:24.877 10:31:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:24.877 10:31:18 -- common/autotest_common.sh@852 -- # return 0 00:17:24.877 10:31:18 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:17:24.877 10:31:18 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:24.877 10:31:18 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:17:24.877 10:31:18 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:17:24.877 10:31:18 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:24.877 10:31:18 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:24.878 10:31:18 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:24.878 10:31:18 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:24.878 10:31:18 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:17:24.878 malloc1 00:17:24.878 10:31:18 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:17:25.136 [2024-07-12 10:31:18.890689] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:25.136 [2024-07-12 10:31:18.890788] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:25.136 [2024-07-12 10:31:18.890821] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:17:25.136 [2024-07-12 10:31:18.890877] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:25.136 [2024-07-12 10:31:18.893107] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:25.136 [2024-07-12 10:31:18.893153] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:25.136 pt1 00:17:25.136 10:31:18 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:25.136 10:31:18 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:25.136 10:31:18 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:17:25.136 10:31:18 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:17:25.136 10:31:18 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:25.136 10:31:18 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:25.136 10:31:18 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:25.136 10:31:18 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:25.136 10:31:18 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:17:25.394 malloc2 00:17:25.394 10:31:19 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:25.394 [2024-07-12 10:31:19.298214] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:25.394 [2024-07-12 10:31:19.298279] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:25.394 [2024-07-12 10:31:19.298320] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:17:25.394 [2024-07-12 10:31:19.298381] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:25.394 [2024-07-12 10:31:19.300566] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:25.394 [2024-07-12 10:31:19.300617] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:25.394 pt2 00:17:25.394 10:31:19 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:25.394 10:31:19 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:25.394 10:31:19 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:17:25.394 10:31:19 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:17:25.394 10:31:19 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:25.394 10:31:19 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:25.394 10:31:19 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:25.394 10:31:19 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:25.394 10:31:19 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:17:25.652 malloc3 00:17:25.652 10:31:19 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 
00000000-0000-0000-0000-000000000003 00:17:25.909 [2024-07-12 10:31:19.734938] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:25.909 [2024-07-12 10:31:19.735004] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:25.910 [2024-07-12 10:31:19.735041] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:17:25.910 [2024-07-12 10:31:19.735082] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:25.910 [2024-07-12 10:31:19.737290] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:25.910 [2024-07-12 10:31:19.737341] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:25.910 pt3 00:17:25.910 10:31:19 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:25.910 10:31:19 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:25.910 10:31:19 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:17:26.167 [2024-07-12 10:31:19.910993] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:26.167 [2024-07-12 10:31:19.912916] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:26.168 [2024-07-12 10:31:19.912982] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:26.168 [2024-07-12 10:31:19.913169] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:17:26.168 [2024-07-12 10:31:19.913183] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:26.168 [2024-07-12 10:31:19.913303] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:17:26.168 [2024-07-12 10:31:19.913635] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:17:26.168 [2024-07-12 10:31:19.913657] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:17:26.168 [2024-07-12 10:31:19.913779] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:26.168 10:31:19 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:17:26.168 10:31:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:26.168 10:31:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:26.168 10:31:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:26.168 10:31:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:26.168 10:31:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:26.168 10:31:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:26.168 10:31:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:26.168 10:31:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:26.168 10:31:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:26.168 10:31:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.168 10:31:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:26.425 10:31:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:26.426 "name": "raid_bdev1", 00:17:26.426 "uuid": "0bac5a1a-ed05-423d-aac7-040d07297c74", 00:17:26.426 "strip_size_kb": 64, 00:17:26.426 "state": "online", 00:17:26.426 "raid_level": "concat", 
00:17:26.426 "superblock": true, 00:17:26.426 "num_base_bdevs": 3, 00:17:26.426 "num_base_bdevs_discovered": 3, 00:17:26.426 "num_base_bdevs_operational": 3, 00:17:26.426 "base_bdevs_list": [ 00:17:26.426 { 00:17:26.426 "name": "pt1", 00:17:26.426 "uuid": "aaa0ee7d-61b5-5f51-86ec-394300f812bb", 00:17:26.426 "is_configured": true, 00:17:26.426 "data_offset": 2048, 00:17:26.426 "data_size": 63488 00:17:26.426 }, 00:17:26.426 { 00:17:26.426 "name": "pt2", 00:17:26.426 "uuid": "c7046cd0-8ebf-5044-b381-865ef0b47295", 00:17:26.426 "is_configured": true, 00:17:26.426 "data_offset": 2048, 00:17:26.426 "data_size": 63488 00:17:26.426 }, 00:17:26.426 { 00:17:26.426 "name": "pt3", 00:17:26.426 "uuid": "dfb74ae8-809b-5e18-b80f-e1ab8221f1d2", 00:17:26.426 "is_configured": true, 00:17:26.426 "data_offset": 2048, 00:17:26.426 "data_size": 63488 00:17:26.426 } 00:17:26.426 ] 00:17:26.426 }' 00:17:26.426 10:31:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:26.426 10:31:20 -- common/autotest_common.sh@10 -- # set +x 00:17:26.992 10:31:20 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:26.992 10:31:20 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:17:27.250 [2024-07-12 10:31:20.971320] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:27.250 10:31:20 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=0bac5a1a-ed05-423d-aac7-040d07297c74 00:17:27.250 10:31:20 -- bdev/bdev_raid.sh@380 -- # '[' -z 0bac5a1a-ed05-423d-aac7-040d07297c74 ']' 00:17:27.250 10:31:20 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:27.508 [2024-07-12 10:31:21.219175] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:27.508 [2024-07-12 10:31:21.219198] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:27.508 [2024-07-12 10:31:21.219256] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:27.508 [2024-07-12 10:31:21.219311] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:27.508 [2024-07-12 10:31:21.219323] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:17:27.508 10:31:21 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:27.508 10:31:21 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:17:27.508 10:31:21 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:17:27.508 10:31:21 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:17:27.508 10:31:21 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:27.508 10:31:21 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:27.766 10:31:21 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:27.766 10:31:21 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:28.023 10:31:21 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:28.023 10:31:21 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:28.281 10:31:21 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs 00:17:28.281 10:31:21 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:28.539 10:31:22 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:17:28.539 10:31:22 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:28.539 10:31:22 -- common/autotest_common.sh@640 -- # local es=0 00:17:28.539 10:31:22 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:28.539 10:31:22 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:28.539 10:31:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:28.539 10:31:22 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:28.539 10:31:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:28.539 10:31:22 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:28.539 10:31:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:28.539 10:31:22 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:28.540 10:31:22 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:28.540 10:31:22 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:28.540 [2024-07-12 10:31:22.387325] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:28.540 [2024-07-12 10:31:22.389210] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:28.540 [2024-07-12 10:31:22.389261] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:28.540 [2024-07-12 10:31:22.389304] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:17:28.540 [2024-07-12 10:31:22.389364] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:17:28.540 [2024-07-12 10:31:22.389397] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:17:28.540 [2024-07-12 10:31:22.389444] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:28.540 [2024-07-12 10:31:22.389454] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state configuring 00:17:28.540 request: 00:17:28.540 { 00:17:28.540 "name": "raid_bdev1", 00:17:28.540 "raid_level": "concat", 00:17:28.540 "base_bdevs": [ 00:17:28.540 "malloc1", 00:17:28.540 "malloc2", 00:17:28.540 "malloc3" 00:17:28.540 ], 00:17:28.540 "superblock": false, 00:17:28.540 "strip_size_kb": 64, 00:17:28.540 "method": "bdev_raid_create", 00:17:28.540 "req_id": 1 00:17:28.540 } 00:17:28.540 Got JSON-RPC error response 00:17:28.540 response: 00:17:28.540 { 00:17:28.540 "code": -17, 00:17:28.540 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:28.540 } 00:17:28.540 10:31:22 -- common/autotest_common.sh@643 -- # es=1 00:17:28.540 10:31:22 -- common/autotest_common.sh@651 -- # (( es > 128 )) 
00:17:28.540 10:31:22 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:28.540 10:31:22 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:28.540 10:31:22 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:28.540 10:31:22 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:17:28.798 10:31:22 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:17:28.798 10:31:22 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:17:28.798 10:31:22 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:29.057 [2024-07-12 10:31:22.751326] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:29.057 [2024-07-12 10:31:22.751390] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:29.057 [2024-07-12 10:31:22.751423] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:29.057 [2024-07-12 10:31:22.751443] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:29.057 [2024-07-12 10:31:22.753558] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:29.057 [2024-07-12 10:31:22.753603] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:29.057 [2024-07-12 10:31:22.753699] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:17:29.057 [2024-07-12 10:31:22.753751] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:29.057 pt1 00:17:29.057 10:31:22 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:17:29.057 10:31:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:29.057 10:31:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:29.057 10:31:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:29.057 10:31:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:29.057 10:31:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:29.057 10:31:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:29.057 10:31:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:29.057 10:31:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:29.057 10:31:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:29.057 10:31:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:29.057 10:31:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.057 10:31:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:29.057 "name": "raid_bdev1", 00:17:29.057 "uuid": "0bac5a1a-ed05-423d-aac7-040d07297c74", 00:17:29.057 "strip_size_kb": 64, 00:17:29.057 "state": "configuring", 00:17:29.057 "raid_level": "concat", 00:17:29.057 "superblock": true, 00:17:29.057 "num_base_bdevs": 3, 00:17:29.057 "num_base_bdevs_discovered": 1, 00:17:29.057 "num_base_bdevs_operational": 3, 00:17:29.057 "base_bdevs_list": [ 00:17:29.057 { 00:17:29.057 "name": "pt1", 00:17:29.057 "uuid": "aaa0ee7d-61b5-5f51-86ec-394300f812bb", 00:17:29.057 "is_configured": true, 00:17:29.057 "data_offset": 2048, 00:17:29.057 "data_size": 63488 00:17:29.057 }, 00:17:29.057 { 00:17:29.057 "name": null, 00:17:29.057 "uuid": "c7046cd0-8ebf-5044-b381-865ef0b47295", 00:17:29.057 "is_configured": 
false, 00:17:29.057 "data_offset": 2048, 00:17:29.057 "data_size": 63488 00:17:29.057 }, 00:17:29.057 { 00:17:29.057 "name": null, 00:17:29.057 "uuid": "dfb74ae8-809b-5e18-b80f-e1ab8221f1d2", 00:17:29.057 "is_configured": false, 00:17:29.057 "data_offset": 2048, 00:17:29.057 "data_size": 63488 00:17:29.057 } 00:17:29.057 ] 00:17:29.057 }' 00:17:29.057 10:31:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:29.057 10:31:22 -- common/autotest_common.sh@10 -- # set +x 00:17:29.989 10:31:23 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:17:29.989 10:31:23 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:29.989 [2024-07-12 10:31:23.795525] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:29.989 [2024-07-12 10:31:23.795599] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:29.989 [2024-07-12 10:31:23.795637] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:17:29.989 [2024-07-12 10:31:23.795658] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:29.989 [2024-07-12 10:31:23.796007] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:29.989 [2024-07-12 10:31:23.796044] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:29.989 [2024-07-12 10:31:23.796136] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:29.989 [2024-07-12 10:31:23.796159] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:29.989 pt2 00:17:29.989 10:31:23 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:30.248 [2024-07-12 10:31:24.027577] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:30.248 10:31:24 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:17:30.248 10:31:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:30.248 10:31:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:30.248 10:31:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:30.248 10:31:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:30.248 10:31:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:30.248 10:31:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:30.248 10:31:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:30.248 10:31:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:30.248 10:31:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:30.248 10:31:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:30.248 10:31:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.507 10:31:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:30.507 "name": "raid_bdev1", 00:17:30.507 "uuid": "0bac5a1a-ed05-423d-aac7-040d07297c74", 00:17:30.507 "strip_size_kb": 64, 00:17:30.507 "state": "configuring", 00:17:30.507 "raid_level": "concat", 00:17:30.507 "superblock": true, 00:17:30.507 "num_base_bdevs": 3, 00:17:30.507 "num_base_bdevs_discovered": 1, 00:17:30.507 "num_base_bdevs_operational": 3, 00:17:30.507 "base_bdevs_list": [ 00:17:30.507 { 00:17:30.507 "name": "pt1", 
00:17:30.507 "uuid": "aaa0ee7d-61b5-5f51-86ec-394300f812bb", 00:17:30.507 "is_configured": true, 00:17:30.507 "data_offset": 2048, 00:17:30.507 "data_size": 63488 00:17:30.507 }, 00:17:30.507 { 00:17:30.507 "name": null, 00:17:30.507 "uuid": "c7046cd0-8ebf-5044-b381-865ef0b47295", 00:17:30.507 "is_configured": false, 00:17:30.507 "data_offset": 2048, 00:17:30.507 "data_size": 63488 00:17:30.507 }, 00:17:30.507 { 00:17:30.507 "name": null, 00:17:30.507 "uuid": "dfb74ae8-809b-5e18-b80f-e1ab8221f1d2", 00:17:30.507 "is_configured": false, 00:17:30.507 "data_offset": 2048, 00:17:30.507 "data_size": 63488 00:17:30.507 } 00:17:30.507 ] 00:17:30.507 }' 00:17:30.507 10:31:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:30.507 10:31:24 -- common/autotest_common.sh@10 -- # set +x 00:17:31.075 10:31:24 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:17:31.075 10:31:24 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:31.075 10:31:24 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:31.333 [2024-07-12 10:31:25.219742] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:31.333 [2024-07-12 10:31:25.219801] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:31.333 [2024-07-12 10:31:25.219832] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:31.333 [2024-07-12 10:31:25.219867] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:31.333 [2024-07-12 10:31:25.220219] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:31.333 [2024-07-12 10:31:25.220262] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:31.333 [2024-07-12 10:31:25.220352] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:31.333 [2024-07-12 10:31:25.220374] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:31.333 pt2 00:17:31.333 10:31:25 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:31.333 10:31:25 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:31.333 10:31:25 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:31.592 [2024-07-12 10:31:25.403779] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:31.592 [2024-07-12 10:31:25.403837] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:31.592 [2024-07-12 10:31:25.403866] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:17:31.592 [2024-07-12 10:31:25.403890] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:31.592 [2024-07-12 10:31:25.404228] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:31.592 [2024-07-12 10:31:25.404270] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:31.592 [2024-07-12 10:31:25.404361] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:17:31.592 [2024-07-12 10:31:25.404396] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:31.592 [2024-07-12 10:31:25.404494] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 
0x616000009c80 00:17:31.592 [2024-07-12 10:31:25.404506] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:31.592 [2024-07-12 10:31:25.404595] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:17:31.592 [2024-07-12 10:31:25.404886] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:17:31.592 [2024-07-12 10:31:25.404907] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:17:31.592 [2024-07-12 10:31:25.405014] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:31.592 pt3 00:17:31.592 10:31:25 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:31.592 10:31:25 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:31.592 10:31:25 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:17:31.592 10:31:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:31.592 10:31:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:31.592 10:31:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:31.592 10:31:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:31.592 10:31:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:31.592 10:31:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:31.592 10:31:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:31.592 10:31:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:31.592 10:31:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:31.592 10:31:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:31.592 10:31:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.850 10:31:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:31.850 "name": "raid_bdev1", 00:17:31.850 "uuid": "0bac5a1a-ed05-423d-aac7-040d07297c74", 00:17:31.850 "strip_size_kb": 64, 00:17:31.850 "state": "online", 00:17:31.850 "raid_level": "concat", 00:17:31.850 "superblock": true, 00:17:31.850 "num_base_bdevs": 3, 00:17:31.850 "num_base_bdevs_discovered": 3, 00:17:31.850 "num_base_bdevs_operational": 3, 00:17:31.850 "base_bdevs_list": [ 00:17:31.850 { 00:17:31.850 "name": "pt1", 00:17:31.850 "uuid": "aaa0ee7d-61b5-5f51-86ec-394300f812bb", 00:17:31.850 "is_configured": true, 00:17:31.850 "data_offset": 2048, 00:17:31.850 "data_size": 63488 00:17:31.850 }, 00:17:31.850 { 00:17:31.850 "name": "pt2", 00:17:31.851 "uuid": "c7046cd0-8ebf-5044-b381-865ef0b47295", 00:17:31.851 "is_configured": true, 00:17:31.851 "data_offset": 2048, 00:17:31.851 "data_size": 63488 00:17:31.851 }, 00:17:31.851 { 00:17:31.851 "name": "pt3", 00:17:31.851 "uuid": "dfb74ae8-809b-5e18-b80f-e1ab8221f1d2", 00:17:31.851 "is_configured": true, 00:17:31.851 "data_offset": 2048, 00:17:31.851 "data_size": 63488 00:17:31.851 } 00:17:31.851 ] 00:17:31.851 }' 00:17:31.851 10:31:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:31.851 10:31:25 -- common/autotest_common.sh@10 -- # set +x 00:17:32.787 10:31:26 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:17:32.787 10:31:26 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:32.787 [2024-07-12 10:31:26.616147] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:32.787 10:31:26 -- bdev/bdev_raid.sh@430 -- # '[' 
0bac5a1a-ed05-423d-aac7-040d07297c74 '!=' 0bac5a1a-ed05-423d-aac7-040d07297c74 ']' 00:17:32.787 10:31:26 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:17:32.787 10:31:26 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:32.787 10:31:26 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:32.787 10:31:26 -- bdev/bdev_raid.sh@511 -- # killprocess 119824 00:17:32.787 10:31:26 -- common/autotest_common.sh@926 -- # '[' -z 119824 ']' 00:17:32.787 10:31:26 -- common/autotest_common.sh@930 -- # kill -0 119824 00:17:32.787 10:31:26 -- common/autotest_common.sh@931 -- # uname 00:17:32.787 10:31:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:32.787 10:31:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 119824 00:17:32.787 killing process with pid 119824 00:17:32.787 10:31:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:32.787 10:31:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:32.787 10:31:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 119824' 00:17:32.787 10:31:26 -- common/autotest_common.sh@945 -- # kill 119824 00:17:32.787 10:31:26 -- common/autotest_common.sh@950 -- # wait 119824 00:17:32.787 [2024-07-12 10:31:26.653220] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:32.787 [2024-07-12 10:31:26.653267] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:32.787 [2024-07-12 10:31:26.653308] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:32.787 [2024-07-12 10:31:26.653317] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:17:33.046 [2024-07-12 10:31:26.851906] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:33.984 ************************************ 00:17:33.984 END TEST raid_superblock_test 00:17:33.984 ************************************ 00:17:33.984 10:31:27 -- bdev/bdev_raid.sh@513 -- # return 0 00:17:33.984 00:17:33.984 real 0m10.371s 00:17:33.984 user 0m18.269s 00:17:33.984 sys 0m1.152s 00:17:33.984 10:31:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:33.984 10:31:27 -- common/autotest_common.sh@10 -- # set +x 00:17:34.244 10:31:27 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:17:34.244 10:31:27 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:17:34.244 10:31:27 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:17:34.244 10:31:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:34.244 10:31:27 -- common/autotest_common.sh@10 -- # set +x 00:17:34.244 ************************************ 00:17:34.244 START TEST raid_state_function_test 00:17:34.244 ************************************ 00:17:34.244 10:31:27 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 3 false 00:17:34.244 10:31:27 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:17:34.244 10:31:27 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:17:34.244 10:31:27 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:17:34.244 10:31:27 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:34.245 10:31:27 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:17:34.245 10:31:27 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:34.245 10:31:27 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:34.245 10:31:27 -- 
bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:17:34.245 10:31:27 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:34.245 10:31:27 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:34.245 10:31:27 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:17:34.245 10:31:27 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:34.245 10:31:27 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:34.245 10:31:27 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:17:34.245 10:31:27 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:34.245 10:31:27 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:34.245 10:31:27 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:34.245 10:31:27 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:34.245 10:31:27 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:34.245 10:31:27 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:34.245 10:31:27 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:34.245 10:31:27 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:17:34.245 10:31:27 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:17:34.245 10:31:27 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:17:34.245 10:31:27 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:17:34.245 10:31:27 -- bdev/bdev_raid.sh@226 -- # raid_pid=120150 00:17:34.245 Process raid pid: 120150 00:17:34.245 10:31:27 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 120150' 00:17:34.245 10:31:27 -- bdev/bdev_raid.sh@228 -- # waitforlisten 120150 /var/tmp/spdk-raid.sock 00:17:34.245 10:31:27 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:34.245 10:31:27 -- common/autotest_common.sh@819 -- # '[' -z 120150 ']' 00:17:34.245 10:31:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:34.245 10:31:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:34.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:34.245 10:31:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:34.245 10:31:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:34.245 10:31:27 -- common/autotest_common.sh@10 -- # set +x 00:17:34.245 [2024-07-12 10:31:27.988466] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
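[Annotation] The trace above shows the harness launching SPDK's bare bdev application (test/app/bdev_svc/bdev_svc) with -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid and then blocking in waitforlisten until the RPC socket answers. A minimal standalone sketch of that launch-and-wait pattern, assuming the repository and socket paths from this run; the rpc_get_methods poll is one simple readiness check, not the harness's exact waitforlisten implementation:

    SPDK=/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/spdk-raid.sock
    "$SPDK/test/app/bdev_svc/bdev_svc" -r "$SOCK" -i 0 -L bdev_raid &
    raid_pid=$!
    # rpc.py exits non-zero while the app is still starting, so poll a cheap RPC.
    until "$SPDK/scripts/rpc.py" -s "$SOCK" -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done
    echo "bdev_svc ready, pid $raid_pid"

Everything after this point in the trace is driven through rpc.py against that socket.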
00:17:34.245 [2024-07-12 10:31:27.988660] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:34.245 [2024-07-12 10:31:28.152358] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:34.504 [2024-07-12 10:31:28.328178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:34.764 [2024-07-12 10:31:28.516385] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:35.332 10:31:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:35.332 10:31:28 -- common/autotest_common.sh@852 -- # return 0 00:17:35.332 10:31:28 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:35.332 [2024-07-12 10:31:29.192001] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:35.332 [2024-07-12 10:31:29.192094] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:35.332 [2024-07-12 10:31:29.192107] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:35.332 [2024-07-12 10:31:29.192125] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:35.332 [2024-07-12 10:31:29.192133] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:35.332 [2024-07-12 10:31:29.192172] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:35.332 10:31:29 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:35.332 10:31:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:35.332 10:31:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:35.332 10:31:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:35.332 10:31:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:35.332 10:31:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:35.332 10:31:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:35.332 10:31:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:35.332 10:31:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:35.332 10:31:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:35.332 10:31:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:35.332 10:31:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:35.590 10:31:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:35.590 "name": "Existed_Raid", 00:17:35.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.590 "strip_size_kb": 0, 00:17:35.590 "state": "configuring", 00:17:35.590 "raid_level": "raid1", 00:17:35.590 "superblock": false, 00:17:35.590 "num_base_bdevs": 3, 00:17:35.590 "num_base_bdevs_discovered": 0, 00:17:35.590 "num_base_bdevs_operational": 3, 00:17:35.590 "base_bdevs_list": [ 00:17:35.590 { 00:17:35.590 "name": "BaseBdev1", 00:17:35.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.590 "is_configured": false, 00:17:35.590 "data_offset": 0, 00:17:35.590 "data_size": 0 00:17:35.590 }, 00:17:35.590 { 00:17:35.590 "name": "BaseBdev2", 00:17:35.590 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:35.590 "is_configured": false, 00:17:35.590 "data_offset": 0, 00:17:35.590 "data_size": 0 00:17:35.590 }, 00:17:35.590 { 00:17:35.590 "name": "BaseBdev3", 00:17:35.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.590 "is_configured": false, 00:17:35.590 "data_offset": 0, 00:17:35.590 "data_size": 0 00:17:35.590 } 00:17:35.590 ] 00:17:35.590 }' 00:17:35.590 10:31:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:35.590 10:31:29 -- common/autotest_common.sh@10 -- # set +x 00:17:36.525 10:31:30 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:36.525 [2024-07-12 10:31:30.329099] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:36.525 [2024-07-12 10:31:30.329130] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:17:36.525 10:31:30 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:36.784 [2024-07-12 10:31:30.557145] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:36.784 [2024-07-12 10:31:30.557195] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:36.784 [2024-07-12 10:31:30.557207] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:36.784 [2024-07-12 10:31:30.557223] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:36.784 [2024-07-12 10:31:30.557230] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:36.784 [2024-07-12 10:31:30.557259] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:36.784 10:31:30 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:37.042 [2024-07-12 10:31:30.774505] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:37.042 BaseBdev1 00:17:37.042 10:31:30 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:37.042 10:31:30 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:17:37.042 10:31:30 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:37.042 10:31:30 -- common/autotest_common.sh@889 -- # local i 00:17:37.042 10:31:30 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:37.042 10:31:30 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:37.042 10:31:30 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:37.301 10:31:30 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:37.301 [ 00:17:37.301 { 00:17:37.301 "name": "BaseBdev1", 00:17:37.301 "aliases": [ 00:17:37.301 "7a0763da-a711-4c45-bace-ede128052eb7" 00:17:37.301 ], 00:17:37.301 "product_name": "Malloc disk", 00:17:37.301 "block_size": 512, 00:17:37.301 "num_blocks": 65536, 00:17:37.301 "uuid": "7a0763da-a711-4c45-bace-ede128052eb7", 00:17:37.301 "assigned_rate_limits": { 00:17:37.301 "rw_ios_per_sec": 0, 00:17:37.301 "rw_mbytes_per_sec": 0, 00:17:37.301 "r_mbytes_per_sec": 0, 00:17:37.301 "w_mbytes_per_sec": 0 
00:17:37.301 }, 00:17:37.301 "claimed": true, 00:17:37.301 "claim_type": "exclusive_write", 00:17:37.301 "zoned": false, 00:17:37.301 "supported_io_types": { 00:17:37.301 "read": true, 00:17:37.301 "write": true, 00:17:37.301 "unmap": true, 00:17:37.301 "write_zeroes": true, 00:17:37.301 "flush": true, 00:17:37.301 "reset": true, 00:17:37.301 "compare": false, 00:17:37.301 "compare_and_write": false, 00:17:37.301 "abort": true, 00:17:37.301 "nvme_admin": false, 00:17:37.301 "nvme_io": false 00:17:37.301 }, 00:17:37.301 "memory_domains": [ 00:17:37.301 { 00:17:37.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:37.301 "dma_device_type": 2 00:17:37.301 } 00:17:37.301 ], 00:17:37.301 "driver_specific": {} 00:17:37.301 } 00:17:37.301 ] 00:17:37.301 10:31:31 -- common/autotest_common.sh@895 -- # return 0 00:17:37.301 10:31:31 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:37.301 10:31:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:37.301 10:31:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:37.301 10:31:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:37.301 10:31:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:37.301 10:31:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:37.301 10:31:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:37.301 10:31:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:37.301 10:31:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:37.301 10:31:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:37.301 10:31:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:37.302 10:31:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:37.560 10:31:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:37.560 "name": "Existed_Raid", 00:17:37.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.560 "strip_size_kb": 0, 00:17:37.560 "state": "configuring", 00:17:37.560 "raid_level": "raid1", 00:17:37.560 "superblock": false, 00:17:37.560 "num_base_bdevs": 3, 00:17:37.560 "num_base_bdevs_discovered": 1, 00:17:37.560 "num_base_bdevs_operational": 3, 00:17:37.560 "base_bdevs_list": [ 00:17:37.560 { 00:17:37.560 "name": "BaseBdev1", 00:17:37.560 "uuid": "7a0763da-a711-4c45-bace-ede128052eb7", 00:17:37.560 "is_configured": true, 00:17:37.560 "data_offset": 0, 00:17:37.560 "data_size": 65536 00:17:37.560 }, 00:17:37.560 { 00:17:37.560 "name": "BaseBdev2", 00:17:37.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.560 "is_configured": false, 00:17:37.560 "data_offset": 0, 00:17:37.560 "data_size": 0 00:17:37.560 }, 00:17:37.560 { 00:17:37.560 "name": "BaseBdev3", 00:17:37.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.560 "is_configured": false, 00:17:37.560 "data_offset": 0, 00:17:37.560 "data_size": 0 00:17:37.560 } 00:17:37.560 ] 00:17:37.560 }' 00:17:37.560 10:31:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:37.560 10:31:31 -- common/autotest_common.sh@10 -- # set +x 00:17:38.127 10:31:31 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:38.385 [2024-07-12 10:31:32.174732] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:38.385 [2024-07-12 10:31:32.174767] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 
name Existed_Raid, state configuring 00:17:38.385 10:31:32 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:17:38.385 10:31:32 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:38.642 [2024-07-12 10:31:32.350832] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:38.642 [2024-07-12 10:31:32.352703] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:38.642 [2024-07-12 10:31:32.352758] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:38.642 [2024-07-12 10:31:32.352768] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:38.642 [2024-07-12 10:31:32.352793] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:38.642 10:31:32 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:38.642 10:31:32 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:38.642 10:31:32 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:38.642 10:31:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:38.642 10:31:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:38.642 10:31:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:38.642 10:31:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:38.642 10:31:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:38.642 10:31:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:38.642 10:31:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:38.642 10:31:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:38.642 10:31:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:38.642 10:31:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:38.642 10:31:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:38.642 10:31:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:38.642 "name": "Existed_Raid", 00:17:38.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.642 "strip_size_kb": 0, 00:17:38.642 "state": "configuring", 00:17:38.642 "raid_level": "raid1", 00:17:38.642 "superblock": false, 00:17:38.642 "num_base_bdevs": 3, 00:17:38.642 "num_base_bdevs_discovered": 1, 00:17:38.642 "num_base_bdevs_operational": 3, 00:17:38.642 "base_bdevs_list": [ 00:17:38.642 { 00:17:38.642 "name": "BaseBdev1", 00:17:38.642 "uuid": "7a0763da-a711-4c45-bace-ede128052eb7", 00:17:38.642 "is_configured": true, 00:17:38.642 "data_offset": 0, 00:17:38.642 "data_size": 65536 00:17:38.642 }, 00:17:38.642 { 00:17:38.642 "name": "BaseBdev2", 00:17:38.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.642 "is_configured": false, 00:17:38.642 "data_offset": 0, 00:17:38.642 "data_size": 0 00:17:38.642 }, 00:17:38.642 { 00:17:38.642 "name": "BaseBdev3", 00:17:38.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.642 "is_configured": false, 00:17:38.642 "data_offset": 0, 00:17:38.642 "data_size": 0 00:17:38.642 } 00:17:38.642 ] 00:17:38.642 }' 00:17:38.642 10:31:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:38.642 10:31:32 -- common/autotest_common.sh@10 -- # set +x 00:17:39.576 10:31:33 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:39.576 [2024-07-12 10:31:33.441506] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:39.576 BaseBdev2 00:17:39.576 10:31:33 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:39.576 10:31:33 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:17:39.576 10:31:33 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:39.576 10:31:33 -- common/autotest_common.sh@889 -- # local i 00:17:39.576 10:31:33 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:39.576 10:31:33 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:39.576 10:31:33 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:39.833 10:31:33 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:40.091 [ 00:17:40.091 { 00:17:40.091 "name": "BaseBdev2", 00:17:40.091 "aliases": [ 00:17:40.091 "9576276e-0831-462a-9a9d-7cd12265027e" 00:17:40.091 ], 00:17:40.091 "product_name": "Malloc disk", 00:17:40.091 "block_size": 512, 00:17:40.091 "num_blocks": 65536, 00:17:40.091 "uuid": "9576276e-0831-462a-9a9d-7cd12265027e", 00:17:40.091 "assigned_rate_limits": { 00:17:40.091 "rw_ios_per_sec": 0, 00:17:40.091 "rw_mbytes_per_sec": 0, 00:17:40.091 "r_mbytes_per_sec": 0, 00:17:40.091 "w_mbytes_per_sec": 0 00:17:40.091 }, 00:17:40.091 "claimed": true, 00:17:40.091 "claim_type": "exclusive_write", 00:17:40.091 "zoned": false, 00:17:40.091 "supported_io_types": { 00:17:40.091 "read": true, 00:17:40.091 "write": true, 00:17:40.091 "unmap": true, 00:17:40.091 "write_zeroes": true, 00:17:40.091 "flush": true, 00:17:40.091 "reset": true, 00:17:40.091 "compare": false, 00:17:40.091 "compare_and_write": false, 00:17:40.091 "abort": true, 00:17:40.091 "nvme_admin": false, 00:17:40.091 "nvme_io": false 00:17:40.091 }, 00:17:40.091 "memory_domains": [ 00:17:40.091 { 00:17:40.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:40.091 "dma_device_type": 2 00:17:40.091 } 00:17:40.091 ], 00:17:40.091 "driver_specific": {} 00:17:40.091 } 00:17:40.091 ] 00:17:40.091 10:31:33 -- common/autotest_common.sh@895 -- # return 0 00:17:40.091 10:31:33 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:40.091 10:31:33 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:40.091 10:31:33 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:40.091 10:31:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:40.091 10:31:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:40.091 10:31:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:40.091 10:31:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:40.091 10:31:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:40.091 10:31:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:40.091 10:31:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:40.091 10:31:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:40.091 10:31:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:40.091 10:31:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:40.091 10:31:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:40.350 10:31:34 -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:17:40.350 "name": "Existed_Raid", 00:17:40.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.350 "strip_size_kb": 0, 00:17:40.350 "state": "configuring", 00:17:40.350 "raid_level": "raid1", 00:17:40.350 "superblock": false, 00:17:40.350 "num_base_bdevs": 3, 00:17:40.350 "num_base_bdevs_discovered": 2, 00:17:40.350 "num_base_bdevs_operational": 3, 00:17:40.350 "base_bdevs_list": [ 00:17:40.350 { 00:17:40.350 "name": "BaseBdev1", 00:17:40.350 "uuid": "7a0763da-a711-4c45-bace-ede128052eb7", 00:17:40.350 "is_configured": true, 00:17:40.350 "data_offset": 0, 00:17:40.350 "data_size": 65536 00:17:40.350 }, 00:17:40.350 { 00:17:40.350 "name": "BaseBdev2", 00:17:40.350 "uuid": "9576276e-0831-462a-9a9d-7cd12265027e", 00:17:40.350 "is_configured": true, 00:17:40.350 "data_offset": 0, 00:17:40.350 "data_size": 65536 00:17:40.350 }, 00:17:40.350 { 00:17:40.350 "name": "BaseBdev3", 00:17:40.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.350 "is_configured": false, 00:17:40.350 "data_offset": 0, 00:17:40.350 "data_size": 0 00:17:40.350 } 00:17:40.350 ] 00:17:40.350 }' 00:17:40.350 10:31:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:40.350 10:31:34 -- common/autotest_common.sh@10 -- # set +x 00:17:40.916 10:31:34 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:41.174 [2024-07-12 10:31:35.085282] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:41.174 [2024-07-12 10:31:35.085334] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:17:41.174 [2024-07-12 10:31:35.085343] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:17:41.174 [2024-07-12 10:31:35.085471] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:17:41.174 [2024-07-12 10:31:35.085807] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:17:41.174 [2024-07-12 10:31:35.085828] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:17:41.174 [2024-07-12 10:31:35.086051] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:41.174 BaseBdev3 00:17:41.431 10:31:35 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:17:41.431 10:31:35 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:17:41.431 10:31:35 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:41.431 10:31:35 -- common/autotest_common.sh@889 -- # local i 00:17:41.431 10:31:35 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:41.431 10:31:35 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:41.431 10:31:35 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:41.431 10:31:35 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:41.689 [ 00:17:41.689 { 00:17:41.689 "name": "BaseBdev3", 00:17:41.689 "aliases": [ 00:17:41.689 "6f64bf46-cac9-415e-9c0b-fdcbc95c91c7" 00:17:41.689 ], 00:17:41.689 "product_name": "Malloc disk", 00:17:41.689 "block_size": 512, 00:17:41.689 "num_blocks": 65536, 00:17:41.689 "uuid": "6f64bf46-cac9-415e-9c0b-fdcbc95c91c7", 00:17:41.689 "assigned_rate_limits": { 00:17:41.689 "rw_ios_per_sec": 0, 00:17:41.689 "rw_mbytes_per_sec": 0, 
00:17:41.689 "r_mbytes_per_sec": 0, 00:17:41.689 "w_mbytes_per_sec": 0 00:17:41.689 }, 00:17:41.689 "claimed": true, 00:17:41.689 "claim_type": "exclusive_write", 00:17:41.689 "zoned": false, 00:17:41.689 "supported_io_types": { 00:17:41.689 "read": true, 00:17:41.689 "write": true, 00:17:41.689 "unmap": true, 00:17:41.689 "write_zeroes": true, 00:17:41.689 "flush": true, 00:17:41.689 "reset": true, 00:17:41.689 "compare": false, 00:17:41.689 "compare_and_write": false, 00:17:41.690 "abort": true, 00:17:41.690 "nvme_admin": false, 00:17:41.690 "nvme_io": false 00:17:41.690 }, 00:17:41.690 "memory_domains": [ 00:17:41.690 { 00:17:41.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:41.690 "dma_device_type": 2 00:17:41.690 } 00:17:41.690 ], 00:17:41.690 "driver_specific": {} 00:17:41.690 } 00:17:41.690 ] 00:17:41.690 10:31:35 -- common/autotest_common.sh@895 -- # return 0 00:17:41.690 10:31:35 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:41.690 10:31:35 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:41.690 10:31:35 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:17:41.690 10:31:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:41.690 10:31:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:41.690 10:31:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:41.690 10:31:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:41.690 10:31:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:41.690 10:31:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:41.690 10:31:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:41.690 10:31:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:41.690 10:31:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:41.690 10:31:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:41.690 10:31:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:41.947 10:31:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:41.947 "name": "Existed_Raid", 00:17:41.947 "uuid": "73687a6a-3acf-44eb-998e-791ab9cb7bc8", 00:17:41.947 "strip_size_kb": 0, 00:17:41.947 "state": "online", 00:17:41.947 "raid_level": "raid1", 00:17:41.947 "superblock": false, 00:17:41.947 "num_base_bdevs": 3, 00:17:41.947 "num_base_bdevs_discovered": 3, 00:17:41.947 "num_base_bdevs_operational": 3, 00:17:41.947 "base_bdevs_list": [ 00:17:41.948 { 00:17:41.948 "name": "BaseBdev1", 00:17:41.948 "uuid": "7a0763da-a711-4c45-bace-ede128052eb7", 00:17:41.948 "is_configured": true, 00:17:41.948 "data_offset": 0, 00:17:41.948 "data_size": 65536 00:17:41.948 }, 00:17:41.948 { 00:17:41.948 "name": "BaseBdev2", 00:17:41.948 "uuid": "9576276e-0831-462a-9a9d-7cd12265027e", 00:17:41.948 "is_configured": true, 00:17:41.948 "data_offset": 0, 00:17:41.948 "data_size": 65536 00:17:41.948 }, 00:17:41.948 { 00:17:41.948 "name": "BaseBdev3", 00:17:41.948 "uuid": "6f64bf46-cac9-415e-9c0b-fdcbc95c91c7", 00:17:41.948 "is_configured": true, 00:17:41.948 "data_offset": 0, 00:17:41.948 "data_size": 65536 00:17:41.948 } 00:17:41.948 ] 00:17:41.948 }' 00:17:41.948 10:31:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:41.948 10:31:35 -- common/autotest_common.sh@10 -- # set +x 00:17:42.514 10:31:36 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:42.772 [2024-07-12 
10:31:36.557599] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:42.772 10:31:36 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:42.772 10:31:36 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:17:42.772 10:31:36 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:42.772 10:31:36 -- bdev/bdev_raid.sh@196 -- # return 0 00:17:42.772 10:31:36 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:17:42.772 10:31:36 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:42.772 10:31:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:42.772 10:31:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:42.772 10:31:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:42.772 10:31:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:42.772 10:31:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:42.772 10:31:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:42.772 10:31:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:42.772 10:31:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:42.772 10:31:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:42.772 10:31:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:42.772 10:31:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:43.030 10:31:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:43.030 "name": "Existed_Raid", 00:17:43.030 "uuid": "73687a6a-3acf-44eb-998e-791ab9cb7bc8", 00:17:43.030 "strip_size_kb": 0, 00:17:43.030 "state": "online", 00:17:43.030 "raid_level": "raid1", 00:17:43.030 "superblock": false, 00:17:43.030 "num_base_bdevs": 3, 00:17:43.030 "num_base_bdevs_discovered": 2, 00:17:43.030 "num_base_bdevs_operational": 2, 00:17:43.030 "base_bdevs_list": [ 00:17:43.030 { 00:17:43.030 "name": null, 00:17:43.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.030 "is_configured": false, 00:17:43.030 "data_offset": 0, 00:17:43.030 "data_size": 65536 00:17:43.030 }, 00:17:43.030 { 00:17:43.030 "name": "BaseBdev2", 00:17:43.030 "uuid": "9576276e-0831-462a-9a9d-7cd12265027e", 00:17:43.030 "is_configured": true, 00:17:43.030 "data_offset": 0, 00:17:43.030 "data_size": 65536 00:17:43.030 }, 00:17:43.030 { 00:17:43.030 "name": "BaseBdev3", 00:17:43.030 "uuid": "6f64bf46-cac9-415e-9c0b-fdcbc95c91c7", 00:17:43.030 "is_configured": true, 00:17:43.030 "data_offset": 0, 00:17:43.030 "data_size": 65536 00:17:43.030 } 00:17:43.030 ] 00:17:43.030 }' 00:17:43.030 10:31:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:43.030 10:31:36 -- common/autotest_common.sh@10 -- # set +x 00:17:43.596 10:31:37 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:43.596 10:31:37 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:43.596 10:31:37 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:43.596 10:31:37 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:43.853 10:31:37 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:43.854 10:31:37 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:43.854 10:31:37 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:44.112 [2024-07-12 10:31:37.892545] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
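[Annotation] Note on the deletions above: raid1 is a redundant level (has_redundancy returned 0 earlier in this trace), so removing BaseBdev1 left Existed_Raid online with num_base_bdevs_operational dropped from 3 to 2. The bdev_malloc_delete BaseBdev2 just issued repeats the exercise, and the deconfigure messages below show the array flipping from online to offline once the remaining base bdevs are gone. A sketch of the delete-and-inspect step against this run's socket, with names as used by the test:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_malloc_delete BaseBdev2
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_get_bdevs all |
        jq -r '.[] | select(.name == "Existed_Raid")
               | "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs)"'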
00:17:44.112 10:31:37 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:44.112 10:31:37 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:44.112 10:31:37 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:44.112 10:31:37 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:44.370 10:31:38 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:44.370 10:31:38 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:44.370 10:31:38 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:44.627 [2024-07-12 10:31:38.395287] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:44.627 [2024-07-12 10:31:38.395512] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:44.627 [2024-07-12 10:31:38.395680] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:44.627 [2024-07-12 10:31:38.462062] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:44.627 [2024-07-12 10:31:38.462240] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:17:44.627 10:31:38 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:44.627 10:31:38 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:44.628 10:31:38 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:44.628 10:31:38 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:44.885 10:31:38 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:44.885 10:31:38 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:44.885 10:31:38 -- bdev/bdev_raid.sh@287 -- # killprocess 120150 00:17:44.885 10:31:38 -- common/autotest_common.sh@926 -- # '[' -z 120150 ']' 00:17:44.885 10:31:38 -- common/autotest_common.sh@930 -- # kill -0 120150 00:17:44.885 10:31:38 -- common/autotest_common.sh@931 -- # uname 00:17:44.885 10:31:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:44.885 10:31:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 120150 00:17:44.885 killing process with pid 120150 00:17:44.885 10:31:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:44.885 10:31:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:44.885 10:31:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 120150' 00:17:44.885 10:31:38 -- common/autotest_common.sh@945 -- # kill 120150 00:17:44.885 10:31:38 -- common/autotest_common.sh@950 -- # wait 120150 00:17:44.885 [2024-07-12 10:31:38.716891] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:44.885 [2024-07-12 10:31:38.716989] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:45.821 ************************************ 00:17:45.821 END TEST raid_state_function_test 00:17:45.821 ************************************ 00:17:45.821 10:31:39 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:45.821 00:17:45.821 real 0m11.813s 00:17:45.821 user 0m21.091s 00:17:45.821 sys 0m1.278s 00:17:45.821 10:31:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:45.821 10:31:39 -- common/autotest_common.sh@10 -- # set +x 00:17:46.079 10:31:39 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 
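[Annotation] The non-superblock pass ends above, and the raid_state_function_test_sb invocation that begins here runs the same state machine with superblock=true. The functional difference visible in the trace is that bdev_raid_create gains the -s flag, so raid metadata is persisted on the base bdevs and the later dumps report data_offset 2048 / data_size 63488 instead of 0 / 65536. Side by side, assuming the socket path from this run:

    # pass that just finished (no superblock):
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
    # pass starting here (superblock persisted, note -s):
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid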
00:17:46.079 10:31:39 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:17:46.079 10:31:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:46.079 10:31:39 -- common/autotest_common.sh@10 -- # set +x 00:17:46.079 ************************************ 00:17:46.079 START TEST raid_state_function_test_sb 00:17:46.079 ************************************ 00:17:46.079 10:31:39 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 3 true 00:17:46.079 10:31:39 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:17:46.079 10:31:39 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:17:46.079 10:31:39 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:17:46.079 10:31:39 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:46.079 10:31:39 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:17:46.079 10:31:39 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:46.079 10:31:39 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:46.079 10:31:39 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:17:46.079 10:31:39 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:46.079 10:31:39 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:46.079 10:31:39 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:17:46.079 10:31:39 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:46.079 10:31:39 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:46.079 10:31:39 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:17:46.079 10:31:39 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:46.079 10:31:39 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:46.079 10:31:39 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:46.079 Process raid pid: 120535 00:17:46.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:46.079 10:31:39 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:46.079 10:31:39 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:46.079 10:31:39 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:46.079 10:31:39 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:46.079 10:31:39 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:17:46.079 10:31:39 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:17:46.079 10:31:39 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:17:46.079 10:31:39 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:17:46.079 10:31:39 -- bdev/bdev_raid.sh@226 -- # raid_pid=120535 00:17:46.079 10:31:39 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 120535' 00:17:46.079 10:31:39 -- bdev/bdev_raid.sh@228 -- # waitforlisten 120535 /var/tmp/spdk-raid.sock 00:17:46.079 10:31:39 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:46.079 10:31:39 -- common/autotest_common.sh@819 -- # '[' -z 120535 ']' 00:17:46.079 10:31:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:46.079 10:31:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:46.079 10:31:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
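[Annotation] The bdev_raid.sh@206 lines traced above build the list of base bdev names from num_base_bdevs. Lifted out as a standalone sketch, the pattern is a command substitution filling a bash array:

    num_base_bdevs=3
    base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo "BaseBdev$i"; done))
    printf '%s\n' "${base_bdevs[@]}"   # BaseBdev1 BaseBdev2 BaseBdev3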
00:17:46.079 10:31:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:46.079 10:31:39 -- common/autotest_common.sh@10 -- # set +x 00:17:46.079 [2024-07-12 10:31:39.854817] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:46.079 [2024-07-12 10:31:39.855247] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:46.338 [2024-07-12 10:31:40.021824] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:46.338 [2024-07-12 10:31:40.196693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:46.596 [2024-07-12 10:31:40.385720] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:46.854 10:31:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:46.854 10:31:40 -- common/autotest_common.sh@852 -- # return 0 00:17:46.854 10:31:40 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:47.113 [2024-07-12 10:31:40.881405] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:47.113 [2024-07-12 10:31:40.881747] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:47.113 [2024-07-12 10:31:40.881866] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:47.113 [2024-07-12 10:31:40.881991] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:47.113 [2024-07-12 10:31:40.882081] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:47.113 [2024-07-12 10:31:40.882156] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:47.113 10:31:40 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:47.113 10:31:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:47.113 10:31:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:47.113 10:31:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:47.113 10:31:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:47.113 10:31:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:47.113 10:31:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:47.113 10:31:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:47.113 10:31:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:47.113 10:31:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:47.113 10:31:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:47.113 10:31:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:47.372 10:31:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:47.372 "name": "Existed_Raid", 00:17:47.372 "uuid": "f4aa4915-a7d5-4921-8732-ce1ae8874aa0", 00:17:47.372 "strip_size_kb": 0, 00:17:47.372 "state": "configuring", 00:17:47.372 "raid_level": "raid1", 00:17:47.372 "superblock": true, 00:17:47.372 "num_base_bdevs": 3, 00:17:47.372 "num_base_bdevs_discovered": 0, 00:17:47.372 "num_base_bdevs_operational": 3, 00:17:47.372 "base_bdevs_list": [ 00:17:47.372 { 00:17:47.372 "name": "BaseBdev1", 
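[Annotation] As in the non-superblock pass, the array is created before any of its base bdevs exist: rpc_bdev_raid_create only logs that each one "doesn't exist now", and Existed_Raid parks in the configuring state until base bdevs are created and claimed one by one. A sketch of watching that transition on this run's socket, using the same malloc geometry (32 MiB, 512-byte blocks) the test uses:

    RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
    $RPC bdev_raid_get_bdevs configuring |
        jq -r '.[] | select(.name == "Existed_Raid") | .state'       # configuring
    $RPC bdev_malloc_create 32 512 -b BaseBdev1
    $RPC bdev_raid_get_bdevs all |
        jq '.[] | select(.name == "Existed_Raid") | .num_base_bdevs_discovered'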
00:17:47.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.372 "is_configured": false, 00:17:47.372 "data_offset": 0, 00:17:47.372 "data_size": 0 00:17:47.372 }, 00:17:47.372 { 00:17:47.372 "name": "BaseBdev2", 00:17:47.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.372 "is_configured": false, 00:17:47.372 "data_offset": 0, 00:17:47.372 "data_size": 0 00:17:47.372 }, 00:17:47.372 { 00:17:47.372 "name": "BaseBdev3", 00:17:47.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.372 "is_configured": false, 00:17:47.372 "data_offset": 0, 00:17:47.372 "data_size": 0 00:17:47.372 } 00:17:47.372 ] 00:17:47.372 }' 00:17:47.372 10:31:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:47.372 10:31:41 -- common/autotest_common.sh@10 -- # set +x 00:17:47.938 10:31:41 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:48.210 [2024-07-12 10:31:41.937411] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:48.210 [2024-07-12 10:31:41.937565] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:17:48.210 10:31:41 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:48.210 [2024-07-12 10:31:42.113484] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:48.210 [2024-07-12 10:31:42.113671] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:48.210 [2024-07-12 10:31:42.113770] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:48.210 [2024-07-12 10:31:42.113825] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:48.210 [2024-07-12 10:31:42.113851] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:48.210 [2024-07-12 10:31:42.113989] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:48.485 10:31:42 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:48.485 [2024-07-12 10:31:42.338928] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:48.485 BaseBdev1 00:17:48.485 10:31:42 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:48.485 10:31:42 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:17:48.485 10:31:42 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:48.485 10:31:42 -- common/autotest_common.sh@889 -- # local i 00:17:48.485 10:31:42 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:48.485 10:31:42 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:48.485 10:31:42 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:48.751 10:31:42 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:49.009 [ 00:17:49.009 { 00:17:49.009 "name": "BaseBdev1", 00:17:49.009 "aliases": [ 00:17:49.009 "a619832c-09dc-4d69-a177-35f5165c0e2f" 00:17:49.009 ], 00:17:49.009 "product_name": "Malloc disk", 00:17:49.009 "block_size": 512, 00:17:49.009 "num_blocks": 65536, 
00:17:49.009 "uuid": "a619832c-09dc-4d69-a177-35f5165c0e2f", 00:17:49.009 "assigned_rate_limits": { 00:17:49.009 "rw_ios_per_sec": 0, 00:17:49.009 "rw_mbytes_per_sec": 0, 00:17:49.009 "r_mbytes_per_sec": 0, 00:17:49.009 "w_mbytes_per_sec": 0 00:17:49.009 }, 00:17:49.009 "claimed": true, 00:17:49.009 "claim_type": "exclusive_write", 00:17:49.009 "zoned": false, 00:17:49.009 "supported_io_types": { 00:17:49.009 "read": true, 00:17:49.009 "write": true, 00:17:49.009 "unmap": true, 00:17:49.009 "write_zeroes": true, 00:17:49.009 "flush": true, 00:17:49.009 "reset": true, 00:17:49.009 "compare": false, 00:17:49.009 "compare_and_write": false, 00:17:49.009 "abort": true, 00:17:49.009 "nvme_admin": false, 00:17:49.009 "nvme_io": false 00:17:49.009 }, 00:17:49.009 "memory_domains": [ 00:17:49.009 { 00:17:49.009 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:49.009 "dma_device_type": 2 00:17:49.009 } 00:17:49.009 ], 00:17:49.009 "driver_specific": {} 00:17:49.009 } 00:17:49.009 ] 00:17:49.009 10:31:42 -- common/autotest_common.sh@895 -- # return 0 00:17:49.009 10:31:42 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:49.009 10:31:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:49.009 10:31:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:49.009 10:31:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:49.009 10:31:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:49.009 10:31:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:49.009 10:31:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:49.009 10:31:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:49.009 10:31:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:49.009 10:31:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:49.009 10:31:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:49.009 10:31:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:49.270 10:31:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:49.270 "name": "Existed_Raid", 00:17:49.270 "uuid": "9e16d01f-7357-4b92-923c-167a2f6a957d", 00:17:49.270 "strip_size_kb": 0, 00:17:49.270 "state": "configuring", 00:17:49.270 "raid_level": "raid1", 00:17:49.270 "superblock": true, 00:17:49.270 "num_base_bdevs": 3, 00:17:49.270 "num_base_bdevs_discovered": 1, 00:17:49.270 "num_base_bdevs_operational": 3, 00:17:49.270 "base_bdevs_list": [ 00:17:49.270 { 00:17:49.270 "name": "BaseBdev1", 00:17:49.270 "uuid": "a619832c-09dc-4d69-a177-35f5165c0e2f", 00:17:49.270 "is_configured": true, 00:17:49.270 "data_offset": 2048, 00:17:49.270 "data_size": 63488 00:17:49.270 }, 00:17:49.270 { 00:17:49.270 "name": "BaseBdev2", 00:17:49.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.270 "is_configured": false, 00:17:49.270 "data_offset": 0, 00:17:49.270 "data_size": 0 00:17:49.270 }, 00:17:49.270 { 00:17:49.270 "name": "BaseBdev3", 00:17:49.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.270 "is_configured": false, 00:17:49.270 "data_offset": 0, 00:17:49.270 "data_size": 0 00:17:49.270 } 00:17:49.270 ] 00:17:49.270 }' 00:17:49.270 10:31:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:49.270 10:31:42 -- common/autotest_common.sh@10 -- # set +x 00:17:49.834 10:31:43 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete 
Existed_Raid 00:17:49.834 [2024-07-12 10:31:43.739171] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:49.834 [2024-07-12 10:31:43.739322] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:17:50.093 10:31:43 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:17:50.093 10:31:43 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:50.350 10:31:44 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:50.350 BaseBdev1 00:17:50.350 10:31:44 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:17:50.350 10:31:44 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:17:50.350 10:31:44 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:50.350 10:31:44 -- common/autotest_common.sh@889 -- # local i 00:17:50.350 10:31:44 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:50.350 10:31:44 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:50.350 10:31:44 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:50.607 10:31:44 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:50.865 [ 00:17:50.865 { 00:17:50.865 "name": "BaseBdev1", 00:17:50.865 "aliases": [ 00:17:50.865 "66594fc0-a3ae-4647-88f4-6b9bf574357f" 00:17:50.865 ], 00:17:50.865 "product_name": "Malloc disk", 00:17:50.865 "block_size": 512, 00:17:50.865 "num_blocks": 65536, 00:17:50.865 "uuid": "66594fc0-a3ae-4647-88f4-6b9bf574357f", 00:17:50.865 "assigned_rate_limits": { 00:17:50.865 "rw_ios_per_sec": 0, 00:17:50.865 "rw_mbytes_per_sec": 0, 00:17:50.865 "r_mbytes_per_sec": 0, 00:17:50.865 "w_mbytes_per_sec": 0 00:17:50.865 }, 00:17:50.865 "claimed": false, 00:17:50.865 "zoned": false, 00:17:50.865 "supported_io_types": { 00:17:50.865 "read": true, 00:17:50.865 "write": true, 00:17:50.865 "unmap": true, 00:17:50.865 "write_zeroes": true, 00:17:50.865 "flush": true, 00:17:50.865 "reset": true, 00:17:50.865 "compare": false, 00:17:50.865 "compare_and_write": false, 00:17:50.865 "abort": true, 00:17:50.865 "nvme_admin": false, 00:17:50.865 "nvme_io": false 00:17:50.865 }, 00:17:50.865 "memory_domains": [ 00:17:50.865 { 00:17:50.865 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:50.865 "dma_device_type": 2 00:17:50.865 } 00:17:50.865 ], 00:17:50.865 "driver_specific": {} 00:17:50.865 } 00:17:50.865 ] 00:17:50.865 10:31:44 -- common/autotest_common.sh@895 -- # return 0 00:17:50.865 10:31:44 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:51.123 [2024-07-12 10:31:44.783527] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:51.123 [2024-07-12 10:31:44.785861] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:51.123 [2024-07-12 10:31:44.786033] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:51.123 [2024-07-12 10:31:44.786164] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:51.123 [2024-07-12 10:31:44.786225] bdev_raid_rpc.c: 
302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:51.123 10:31:44 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:51.123 10:31:44 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:51.123 10:31:44 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:51.123 10:31:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:51.123 10:31:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:51.123 10:31:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:51.123 10:31:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:51.123 10:31:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:51.123 10:31:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:51.123 10:31:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:51.123 10:31:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:51.123 10:31:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:51.123 10:31:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:51.123 10:31:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:51.380 10:31:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:51.380 "name": "Existed_Raid", 00:17:51.380 "uuid": "46b0a199-2556-4d50-964f-5afaa7d19bb7", 00:17:51.380 "strip_size_kb": 0, 00:17:51.380 "state": "configuring", 00:17:51.380 "raid_level": "raid1", 00:17:51.380 "superblock": true, 00:17:51.380 "num_base_bdevs": 3, 00:17:51.380 "num_base_bdevs_discovered": 1, 00:17:51.380 "num_base_bdevs_operational": 3, 00:17:51.380 "base_bdevs_list": [ 00:17:51.380 { 00:17:51.380 "name": "BaseBdev1", 00:17:51.380 "uuid": "66594fc0-a3ae-4647-88f4-6b9bf574357f", 00:17:51.380 "is_configured": true, 00:17:51.380 "data_offset": 2048, 00:17:51.380 "data_size": 63488 00:17:51.380 }, 00:17:51.380 { 00:17:51.380 "name": "BaseBdev2", 00:17:51.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.380 "is_configured": false, 00:17:51.380 "data_offset": 0, 00:17:51.380 "data_size": 0 00:17:51.380 }, 00:17:51.381 { 00:17:51.381 "name": "BaseBdev3", 00:17:51.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.381 "is_configured": false, 00:17:51.381 "data_offset": 0, 00:17:51.381 "data_size": 0 00:17:51.381 } 00:17:51.381 ] 00:17:51.381 }' 00:17:51.381 10:31:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:51.381 10:31:45 -- common/autotest_common.sh@10 -- # set +x 00:17:51.947 10:31:45 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:52.204 [2024-07-12 10:31:45.910442] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:52.204 BaseBdev2 00:17:52.204 10:31:45 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:52.204 10:31:45 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:17:52.204 10:31:45 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:52.204 10:31:45 -- common/autotest_common.sh@889 -- # local i 00:17:52.204 10:31:45 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:52.204 10:31:45 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:52.205 10:31:45 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:52.205 10:31:46 -- common/autotest_common.sh@894 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:52.462 [ 00:17:52.462 { 00:17:52.462 "name": "BaseBdev2", 00:17:52.462 "aliases": [ 00:17:52.462 "123d01b0-5ea0-4fb0-bab3-7ba7b8b3f4e9" 00:17:52.462 ], 00:17:52.462 "product_name": "Malloc disk", 00:17:52.462 "block_size": 512, 00:17:52.462 "num_blocks": 65536, 00:17:52.462 "uuid": "123d01b0-5ea0-4fb0-bab3-7ba7b8b3f4e9", 00:17:52.462 "assigned_rate_limits": { 00:17:52.462 "rw_ios_per_sec": 0, 00:17:52.462 "rw_mbytes_per_sec": 0, 00:17:52.462 "r_mbytes_per_sec": 0, 00:17:52.462 "w_mbytes_per_sec": 0 00:17:52.462 }, 00:17:52.462 "claimed": true, 00:17:52.462 "claim_type": "exclusive_write", 00:17:52.462 "zoned": false, 00:17:52.462 "supported_io_types": { 00:17:52.462 "read": true, 00:17:52.462 "write": true, 00:17:52.462 "unmap": true, 00:17:52.462 "write_zeroes": true, 00:17:52.462 "flush": true, 00:17:52.462 "reset": true, 00:17:52.462 "compare": false, 00:17:52.462 "compare_and_write": false, 00:17:52.462 "abort": true, 00:17:52.462 "nvme_admin": false, 00:17:52.462 "nvme_io": false 00:17:52.462 }, 00:17:52.462 "memory_domains": [ 00:17:52.462 { 00:17:52.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:52.462 "dma_device_type": 2 00:17:52.462 } 00:17:52.462 ], 00:17:52.462 "driver_specific": {} 00:17:52.462 } 00:17:52.462 ] 00:17:52.462 10:31:46 -- common/autotest_common.sh@895 -- # return 0 00:17:52.462 10:31:46 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:52.462 10:31:46 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:52.462 10:31:46 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:52.462 10:31:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:52.462 10:31:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:52.462 10:31:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:52.462 10:31:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:52.462 10:31:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:52.462 10:31:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:52.462 10:31:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:52.462 10:31:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:52.462 10:31:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:52.462 10:31:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:52.462 10:31:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:52.720 10:31:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:52.720 "name": "Existed_Raid", 00:17:52.720 "uuid": "46b0a199-2556-4d50-964f-5afaa7d19bb7", 00:17:52.720 "strip_size_kb": 0, 00:17:52.720 "state": "configuring", 00:17:52.720 "raid_level": "raid1", 00:17:52.720 "superblock": true, 00:17:52.720 "num_base_bdevs": 3, 00:17:52.720 "num_base_bdevs_discovered": 2, 00:17:52.720 "num_base_bdevs_operational": 3, 00:17:52.720 "base_bdevs_list": [ 00:17:52.720 { 00:17:52.720 "name": "BaseBdev1", 00:17:52.720 "uuid": "66594fc0-a3ae-4647-88f4-6b9bf574357f", 00:17:52.720 "is_configured": true, 00:17:52.720 "data_offset": 2048, 00:17:52.720 "data_size": 63488 00:17:52.720 }, 00:17:52.720 { 00:17:52.720 "name": "BaseBdev2", 00:17:52.720 "uuid": "123d01b0-5ea0-4fb0-bab3-7ba7b8b3f4e9", 00:17:52.720 "is_configured": true, 00:17:52.720 "data_offset": 2048, 00:17:52.720 "data_size": 63488 00:17:52.720 }, 
00:17:52.720 { 00:17:52.720 "name": "BaseBdev3", 00:17:52.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.720 "is_configured": false, 00:17:52.720 "data_offset": 0, 00:17:52.720 "data_size": 0 00:17:52.720 } 00:17:52.720 ] 00:17:52.720 }' 00:17:52.720 10:31:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:52.720 10:31:46 -- common/autotest_common.sh@10 -- # set +x 00:17:53.307 10:31:47 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:53.565 [2024-07-12 10:31:47.406172] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:53.565 [2024-07-12 10:31:47.406558] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:17:53.565 [2024-07-12 10:31:47.406673] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:53.565 [2024-07-12 10:31:47.406828] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:17:53.565 BaseBdev3 00:17:53.565 [2024-07-12 10:31:47.407275] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:17:53.565 [2024-07-12 10:31:47.407289] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:17:53.565 [2024-07-12 10:31:47.407448] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:53.565 10:31:47 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:17:53.565 10:31:47 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:17:53.565 10:31:47 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:53.565 10:31:47 -- common/autotest_common.sh@889 -- # local i 00:17:53.565 10:31:47 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:53.565 10:31:47 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:53.565 10:31:47 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:53.823 10:31:47 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:54.081 [ 00:17:54.081 { 00:17:54.081 "name": "BaseBdev3", 00:17:54.081 "aliases": [ 00:17:54.081 "bf7546d7-f4cd-4fdb-89c0-9fefc8efd5b9" 00:17:54.081 ], 00:17:54.081 "product_name": "Malloc disk", 00:17:54.081 "block_size": 512, 00:17:54.081 "num_blocks": 65536, 00:17:54.081 "uuid": "bf7546d7-f4cd-4fdb-89c0-9fefc8efd5b9", 00:17:54.081 "assigned_rate_limits": { 00:17:54.081 "rw_ios_per_sec": 0, 00:17:54.081 "rw_mbytes_per_sec": 0, 00:17:54.081 "r_mbytes_per_sec": 0, 00:17:54.081 "w_mbytes_per_sec": 0 00:17:54.081 }, 00:17:54.081 "claimed": true, 00:17:54.081 "claim_type": "exclusive_write", 00:17:54.081 "zoned": false, 00:17:54.081 "supported_io_types": { 00:17:54.081 "read": true, 00:17:54.081 "write": true, 00:17:54.081 "unmap": true, 00:17:54.081 "write_zeroes": true, 00:17:54.081 "flush": true, 00:17:54.081 "reset": true, 00:17:54.081 "compare": false, 00:17:54.081 "compare_and_write": false, 00:17:54.081 "abort": true, 00:17:54.081 "nvme_admin": false, 00:17:54.081 "nvme_io": false 00:17:54.081 }, 00:17:54.081 "memory_domains": [ 00:17:54.081 { 00:17:54.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:54.081 "dma_device_type": 2 00:17:54.081 } 00:17:54.081 ], 00:17:54.081 "driver_specific": {} 00:17:54.081 } 00:17:54.081 ] 00:17:54.081 10:31:47 -- 
common/autotest_common.sh@895 -- # return 0 00:17:54.081 10:31:47 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:54.081 10:31:47 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:54.081 10:31:47 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:17:54.081 10:31:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:54.081 10:31:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:54.081 10:31:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:54.081 10:31:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:54.081 10:31:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:54.081 10:31:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:54.081 10:31:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:54.081 10:31:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:54.081 10:31:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:54.081 10:31:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:54.081 10:31:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:54.340 10:31:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:54.340 "name": "Existed_Raid", 00:17:54.340 "uuid": "46b0a199-2556-4d50-964f-5afaa7d19bb7", 00:17:54.340 "strip_size_kb": 0, 00:17:54.340 "state": "online", 00:17:54.340 "raid_level": "raid1", 00:17:54.340 "superblock": true, 00:17:54.340 "num_base_bdevs": 3, 00:17:54.340 "num_base_bdevs_discovered": 3, 00:17:54.340 "num_base_bdevs_operational": 3, 00:17:54.340 "base_bdevs_list": [ 00:17:54.340 { 00:17:54.340 "name": "BaseBdev1", 00:17:54.340 "uuid": "66594fc0-a3ae-4647-88f4-6b9bf574357f", 00:17:54.340 "is_configured": true, 00:17:54.340 "data_offset": 2048, 00:17:54.340 "data_size": 63488 00:17:54.340 }, 00:17:54.340 { 00:17:54.340 "name": "BaseBdev2", 00:17:54.340 "uuid": "123d01b0-5ea0-4fb0-bab3-7ba7b8b3f4e9", 00:17:54.340 "is_configured": true, 00:17:54.340 "data_offset": 2048, 00:17:54.340 "data_size": 63488 00:17:54.340 }, 00:17:54.340 { 00:17:54.340 "name": "BaseBdev3", 00:17:54.340 "uuid": "bf7546d7-f4cd-4fdb-89c0-9fefc8efd5b9", 00:17:54.340 "is_configured": true, 00:17:54.340 "data_offset": 2048, 00:17:54.340 "data_size": 63488 00:17:54.340 } 00:17:54.340 ] 00:17:54.340 }' 00:17:54.340 10:31:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:54.340 10:31:48 -- common/autotest_common.sh@10 -- # set +x 00:17:54.906 10:31:48 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:55.164 [2024-07-12 10:31:48.858483] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:55.164 10:31:48 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:55.164 10:31:48 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:17:55.164 10:31:48 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:55.164 10:31:48 -- bdev/bdev_raid.sh@196 -- # return 0 00:17:55.164 10:31:48 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:17:55.164 10:31:48 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:55.164 10:31:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:55.164 10:31:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:55.164 10:31:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:55.164 10:31:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 
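The verify_raid_bdev_state helper traced through here reduces to one RPC call plus a jq filter: fetch every RAID bdev, select the one under test, and compare its reported fields against the expectations. A minimal standalone sketch of that check, assuming rpc.py and a listening app on /var/tmp/spdk-raid.sock as in this run — the helper name and its argument handling below are illustrative, not the harness's exact code:

```bash
#!/usr/bin/env bash
# Sketch only: mirrors the bdev_raid_get_bdevs + jq pattern seen in the trace.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

check_raid_state() {
    local name=$1 expected=$2
    local info state
    # Pull all RAID bdevs and keep only the one under test, as the harness does.
    info=$($rpc bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$name\")")
    state=$(jq -r '.state' <<<"$info")
    if [[ $state != "$expected" ]]; then
        echo "raid bdev $name: expected state $expected, got $state" >&2
        return 1
    fi
}

check_raid_state Existed_Raid online
```

In the harness proper the same filter result is stored in raid_bdev_info, and the remaining locals set up above (raid_level, strip_size, num_base_bdevs_discovered, num_base_bdevs_operational) are asserted against it in the same way.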
00:17:55.164 10:31:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:55.164 10:31:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:55.164 10:31:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:55.164 10:31:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:55.164 10:31:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:55.164 10:31:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:55.164 10:31:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:55.422 10:31:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:55.422 "name": "Existed_Raid", 00:17:55.422 "uuid": "46b0a199-2556-4d50-964f-5afaa7d19bb7", 00:17:55.422 "strip_size_kb": 0, 00:17:55.422 "state": "online", 00:17:55.422 "raid_level": "raid1", 00:17:55.422 "superblock": true, 00:17:55.422 "num_base_bdevs": 3, 00:17:55.422 "num_base_bdevs_discovered": 2, 00:17:55.422 "num_base_bdevs_operational": 2, 00:17:55.422 "base_bdevs_list": [ 00:17:55.422 { 00:17:55.422 "name": null, 00:17:55.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.422 "is_configured": false, 00:17:55.422 "data_offset": 2048, 00:17:55.422 "data_size": 63488 00:17:55.422 }, 00:17:55.422 { 00:17:55.422 "name": "BaseBdev2", 00:17:55.422 "uuid": "123d01b0-5ea0-4fb0-bab3-7ba7b8b3f4e9", 00:17:55.422 "is_configured": true, 00:17:55.422 "data_offset": 2048, 00:17:55.422 "data_size": 63488 00:17:55.422 }, 00:17:55.422 { 00:17:55.422 "name": "BaseBdev3", 00:17:55.422 "uuid": "bf7546d7-f4cd-4fdb-89c0-9fefc8efd5b9", 00:17:55.422 "is_configured": true, 00:17:55.422 "data_offset": 2048, 00:17:55.422 "data_size": 63488 00:17:55.422 } 00:17:55.422 ] 00:17:55.422 }' 00:17:55.422 10:31:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:55.422 10:31:49 -- common/autotest_common.sh@10 -- # set +x 00:17:55.988 10:31:49 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:55.988 10:31:49 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:55.988 10:31:49 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:55.988 10:31:49 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:56.247 10:31:49 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:56.247 10:31:49 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:56.247 10:31:49 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:56.505 [2024-07-12 10:31:50.168945] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:56.505 10:31:50 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:56.505 10:31:50 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:56.505 10:31:50 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:56.505 10:31:50 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:56.764 10:31:50 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:56.764 10:31:50 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:56.764 10:31:50 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:56.764 [2024-07-12 10:31:50.660717] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:56.764 [2024-07-12 10:31:50.660920] 
bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:56.764 [2024-07-12 10:31:50.661097] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:57.021 [2024-07-12 10:31:50.728769] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:57.021 [2024-07-12 10:31:50.728949] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:17:57.021 10:31:50 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:57.021 10:31:50 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:57.021 10:31:50 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:57.021 10:31:50 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:57.279 10:31:50 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:57.279 10:31:50 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:57.279 10:31:50 -- bdev/bdev_raid.sh@287 -- # killprocess 120535 00:17:57.279 10:31:50 -- common/autotest_common.sh@926 -- # '[' -z 120535 ']' 00:17:57.279 10:31:50 -- common/autotest_common.sh@930 -- # kill -0 120535 00:17:57.279 10:31:50 -- common/autotest_common.sh@931 -- # uname 00:17:57.279 10:31:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:57.279 10:31:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 120535 00:17:57.279 killing process with pid 120535 00:17:57.279 10:31:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:57.279 10:31:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:57.279 10:31:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 120535' 00:17:57.279 10:31:50 -- common/autotest_common.sh@945 -- # kill 120535 00:17:57.279 10:31:50 -- common/autotest_common.sh@950 -- # wait 120535 00:17:57.279 [2024-07-12 10:31:50.993629] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:57.279 [2024-07-12 10:31:50.993738] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:58.215 ************************************ 00:17:58.215 END TEST raid_state_function_test_sb 00:17:58.215 ************************************ 00:17:58.215 10:31:52 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:58.215 00:17:58.215 real 0m12.221s 00:17:58.215 user 0m21.807s 00:17:58.215 sys 0m1.291s 00:17:58.215 10:31:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:58.215 10:31:52 -- common/autotest_common.sh@10 -- # set +x 00:17:58.215 10:31:52 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:17:58.215 10:31:52 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:17:58.215 10:31:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:58.215 10:31:52 -- common/autotest_common.sh@10 -- # set +x 00:17:58.215 ************************************ 00:17:58.215 START TEST raid_superblock_test 00:17:58.215 ************************************ 00:17:58.215 10:31:52 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid1 3 00:17:58.215 10:31:52 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:17:58.215 10:31:52 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:17:58.215 10:31:52 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:17:58.215 10:31:52 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:17:58.215 10:31:52 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:17:58.215 10:31:52 -- 
bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:17:58.215 10:31:52 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:17:58.215 10:31:52 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:17:58.215 10:31:52 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:17:58.215 10:31:52 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:17:58.215 10:31:52 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:17:58.215 10:31:52 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:17:58.215 10:31:52 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:17:58.215 10:31:52 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:17:58.215 10:31:52 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:17:58.215 10:31:52 -- bdev/bdev_raid.sh@357 -- # raid_pid=120937 00:17:58.215 10:31:52 -- bdev/bdev_raid.sh@358 -- # waitforlisten 120937 /var/tmp/spdk-raid.sock 00:17:58.215 10:31:52 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:58.215 10:31:52 -- common/autotest_common.sh@819 -- # '[' -z 120937 ']' 00:17:58.215 10:31:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:58.215 10:31:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:58.215 10:31:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:58.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:58.215 10:31:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:58.215 10:31:52 -- common/autotest_common.sh@10 -- # set +x 00:17:58.474 [2024-07-12 10:31:52.133808] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
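The raid_superblock_test run starting here is driven entirely over the RPC socket of a bare bdev_svc app: three malloc bdevs are created, each is wrapped in a passthru bdev (pt1..pt3) with a fixed UUID, and the passthru bdevs are assembled into a raid1 with an on-disk superblock. A condensed sketch of that setup, using the same commands that appear later in this trace — the fixed sleep and the final kill stand in for the harness's waitforlisten and killprocess helpers, and error handling is omitted:

```bash
#!/usr/bin/env bash
# Sketch of the raid_superblock_test setup; commands mirror the trace below.
sock=/var/tmp/spdk-raid.sock
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s $sock"

# Start a bare bdev service with RAID debug logging, as the harness does.
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r "$sock" -L bdev_raid &
svc_pid=$!
sleep 1  # assumption: stands in for waitforlisten polling the socket

for i in 1 2 3; do
    $rpc bdev_malloc_create 32 512 -b "malloc$i"
    $rpc bdev_passthru_create -b "malloc$i" -p "pt$i" \
         -u "00000000-0000-0000-0000-00000000000$i"
done
# -s writes a superblock so the array survives re-examination after teardown.
$rpc bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s

kill "$svc_pid"
```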
00:17:58.474 [2024-07-12 10:31:52.134260] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120937 ] 00:17:58.474 [2024-07-12 10:31:52.286663] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.733 [2024-07-12 10:31:52.468976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:58.992 [2024-07-12 10:31:52.653997] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:59.249 10:31:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:59.249 10:31:53 -- common/autotest_common.sh@852 -- # return 0 00:17:59.249 10:31:53 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:17:59.249 10:31:53 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:59.249 10:31:53 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:17:59.249 10:31:53 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:17:59.249 10:31:53 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:59.249 10:31:53 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:59.249 10:31:53 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:59.249 10:31:53 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:59.249 10:31:53 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:17:59.507 malloc1 00:17:59.507 10:31:53 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:59.765 [2024-07-12 10:31:53.441652] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:59.765 [2024-07-12 10:31:53.442011] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:59.765 [2024-07-12 10:31:53.442075] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:17:59.765 [2024-07-12 10:31:53.442382] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:59.765 [2024-07-12 10:31:53.444616] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:59.765 [2024-07-12 10:31:53.444771] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:59.765 pt1 00:17:59.765 10:31:53 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:59.765 10:31:53 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:59.765 10:31:53 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:17:59.765 10:31:53 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:17:59.765 10:31:53 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:59.765 10:31:53 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:59.765 10:31:53 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:59.765 10:31:53 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:59.765 10:31:53 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:18:00.023 malloc2 00:18:00.023 10:31:53 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:18:00.281 [2024-07-12 10:31:53.998595] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:00.281 [2024-07-12 10:31:53.998788] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:00.281 [2024-07-12 10:31:53.998863] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:18:00.281 [2024-07-12 10:31:53.999012] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:00.281 [2024-07-12 10:31:54.001054] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:00.281 [2024-07-12 10:31:54.001237] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:00.281 pt2 00:18:00.281 10:31:54 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:00.281 10:31:54 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:00.281 10:31:54 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:18:00.281 10:31:54 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:18:00.281 10:31:54 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:00.281 10:31:54 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:00.281 10:31:54 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:00.281 10:31:54 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:00.281 10:31:54 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:18:00.540 malloc3 00:18:00.540 10:31:54 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:00.540 [2024-07-12 10:31:54.400202] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:00.540 [2024-07-12 10:31:54.400385] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:00.540 [2024-07-12 10:31:54.400454] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:18:00.540 [2024-07-12 10:31:54.400584] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:00.540 [2024-07-12 10:31:54.402763] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:00.540 [2024-07-12 10:31:54.402933] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:00.540 pt3 00:18:00.540 10:31:54 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:00.540 10:31:54 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:00.540 10:31:54 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:18:00.798 [2024-07-12 10:31:54.632281] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:00.798 [2024-07-12 10:31:54.633975] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:00.798 [2024-07-12 10:31:54.634160] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:00.798 [2024-07-12 10:31:54.634386] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:18:00.798 [2024-07-12 10:31:54.634496] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:00.798 [2024-07-12 10:31:54.634651] bdev_raid.c: 
232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:18:00.798 [2024-07-12 10:31:54.635115] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:18:00.799 [2024-07-12 10:31:54.635249] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:18:00.799 [2024-07-12 10:31:54.635504] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:00.799 10:31:54 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:00.799 10:31:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:00.799 10:31:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:00.799 10:31:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:00.799 10:31:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:00.799 10:31:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:00.799 10:31:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:00.799 10:31:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:00.799 10:31:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:00.799 10:31:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:00.799 10:31:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:00.799 10:31:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.057 10:31:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:01.057 "name": "raid_bdev1", 00:18:01.057 "uuid": "ba1a1351-f7c7-4b99-ace2-db80b9b0abc6", 00:18:01.057 "strip_size_kb": 0, 00:18:01.057 "state": "online", 00:18:01.057 "raid_level": "raid1", 00:18:01.057 "superblock": true, 00:18:01.057 "num_base_bdevs": 3, 00:18:01.057 "num_base_bdevs_discovered": 3, 00:18:01.057 "num_base_bdevs_operational": 3, 00:18:01.057 "base_bdevs_list": [ 00:18:01.057 { 00:18:01.057 "name": "pt1", 00:18:01.057 "uuid": "04fce7c6-c7ce-57a8-a0f0-50716869a212", 00:18:01.057 "is_configured": true, 00:18:01.057 "data_offset": 2048, 00:18:01.057 "data_size": 63488 00:18:01.057 }, 00:18:01.057 { 00:18:01.057 "name": "pt2", 00:18:01.057 "uuid": "5b7bd6a7-39be-5825-b82e-0faea34cc077", 00:18:01.057 "is_configured": true, 00:18:01.057 "data_offset": 2048, 00:18:01.057 "data_size": 63488 00:18:01.057 }, 00:18:01.057 { 00:18:01.057 "name": "pt3", 00:18:01.057 "uuid": "b811dbc3-0165-506b-b73c-059001a76990", 00:18:01.057 "is_configured": true, 00:18:01.057 "data_offset": 2048, 00:18:01.057 "data_size": 63488 00:18:01.057 } 00:18:01.057 ] 00:18:01.057 }' 00:18:01.057 10:31:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:01.057 10:31:54 -- common/autotest_common.sh@10 -- # set +x 00:18:01.625 10:31:55 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:01.625 10:31:55 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:18:01.883 [2024-07-12 10:31:55.604583] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:01.883 10:31:55 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=ba1a1351-f7c7-4b99-ace2-db80b9b0abc6 00:18:01.883 10:31:55 -- bdev/bdev_raid.sh@380 -- # '[' -z ba1a1351-f7c7-4b99-ace2-db80b9b0abc6 ']' 00:18:01.884 10:31:55 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:01.884 [2024-07-12 10:31:55.788435] 
bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:01.884 [2024-07-12 10:31:55.788574] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:01.884 [2024-07-12 10:31:55.788755] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:01.884 [2024-07-12 10:31:55.788959] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:01.884 [2024-07-12 10:31:55.789069] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:18:02.142 10:31:55 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:18:02.142 10:31:55 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:02.142 10:31:55 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:18:02.142 10:31:55 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:18:02.142 10:31:55 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:02.142 10:31:55 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:02.400 10:31:56 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:02.400 10:31:56 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:02.659 10:31:56 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:02.659 10:31:56 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:02.659 10:31:56 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:02.659 10:31:56 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:18:02.918 10:31:56 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:18:02.918 10:31:56 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:18:02.918 10:31:56 -- common/autotest_common.sh@640 -- # local es=0 00:18:02.918 10:31:56 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:18:02.918 10:31:56 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:02.918 10:31:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:02.918 10:31:56 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:02.918 10:31:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:02.918 10:31:56 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:02.918 10:31:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:02.918 10:31:56 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:02.918 10:31:56 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:18:02.918 10:31:56 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:18:03.177 [2024-07-12 10:31:56.892642] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:03.177 [2024-07-12 10:31:56.894296] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:03.177 [2024-07-12 10:31:56.894476] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:03.177 [2024-07-12 10:31:56.894572] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:18:03.177 [2024-07-12 10:31:56.894736] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:18:03.177 [2024-07-12 10:31:56.894805] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:18:03.177 [2024-07-12 10:31:56.894927] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:03.177 [2024-07-12 10:31:56.894965] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state configuring 00:18:03.177 request: 00:18:03.177 { 00:18:03.177 "name": "raid_bdev1", 00:18:03.177 "raid_level": "raid1", 00:18:03.177 "base_bdevs": [ 00:18:03.177 "malloc1", 00:18:03.177 "malloc2", 00:18:03.177 "malloc3" 00:18:03.177 ], 00:18:03.177 "superblock": false, 00:18:03.177 "method": "bdev_raid_create", 00:18:03.177 "req_id": 1 00:18:03.177 } 00:18:03.177 Got JSON-RPC error response 00:18:03.177 response: 00:18:03.177 { 00:18:03.177 "code": -17, 00:18:03.177 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:03.177 } 00:18:03.177 10:31:56 -- common/autotest_common.sh@643 -- # es=1 00:18:03.177 10:31:56 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:18:03.177 10:31:56 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:18:03.177 10:31:56 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:18:03.177 10:31:56 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:18:03.177 10:31:56 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:03.436 10:31:57 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:18:03.436 10:31:57 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:18:03.436 10:31:57 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:03.436 [2024-07-12 10:31:57.316667] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:03.436 [2024-07-12 10:31:57.316849] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:03.436 [2024-07-12 10:31:57.316917] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:03.436 [2024-07-12 10:31:57.317022] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:03.436 [2024-07-12 10:31:57.319162] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:03.436 [2024-07-12 10:31:57.319365] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:03.436 [2024-07-12 10:31:57.319582] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:18:03.436 [2024-07-12 10:31:57.319723] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:03.436 pt1 00:18:03.436 10:31:57 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:18:03.436 
10:31:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:03.436 10:31:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:03.436 10:31:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:03.436 10:31:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:03.436 10:31:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:03.436 10:31:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:03.436 10:31:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:03.436 10:31:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:03.436 10:31:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:03.436 10:31:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:03.436 10:31:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.695 10:31:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:03.695 "name": "raid_bdev1", 00:18:03.695 "uuid": "ba1a1351-f7c7-4b99-ace2-db80b9b0abc6", 00:18:03.695 "strip_size_kb": 0, 00:18:03.695 "state": "configuring", 00:18:03.695 "raid_level": "raid1", 00:18:03.695 "superblock": true, 00:18:03.695 "num_base_bdevs": 3, 00:18:03.695 "num_base_bdevs_discovered": 1, 00:18:03.695 "num_base_bdevs_operational": 3, 00:18:03.695 "base_bdevs_list": [ 00:18:03.695 { 00:18:03.695 "name": "pt1", 00:18:03.695 "uuid": "04fce7c6-c7ce-57a8-a0f0-50716869a212", 00:18:03.695 "is_configured": true, 00:18:03.696 "data_offset": 2048, 00:18:03.696 "data_size": 63488 00:18:03.696 }, 00:18:03.696 { 00:18:03.696 "name": null, 00:18:03.696 "uuid": "5b7bd6a7-39be-5825-b82e-0faea34cc077", 00:18:03.696 "is_configured": false, 00:18:03.696 "data_offset": 2048, 00:18:03.696 "data_size": 63488 00:18:03.696 }, 00:18:03.696 { 00:18:03.696 "name": null, 00:18:03.696 "uuid": "b811dbc3-0165-506b-b73c-059001a76990", 00:18:03.696 "is_configured": false, 00:18:03.696 "data_offset": 2048, 00:18:03.696 "data_size": 63488 00:18:03.696 } 00:18:03.696 ] 00:18:03.696 }' 00:18:03.696 10:31:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:03.696 10:31:57 -- common/autotest_common.sh@10 -- # set +x 00:18:04.271 10:31:58 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:18:04.272 10:31:58 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:04.529 [2024-07-12 10:31:58.336866] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:04.529 [2024-07-12 10:31:58.337082] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:04.529 [2024-07-12 10:31:58.337158] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:18:04.529 [2024-07-12 10:31:58.337284] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:04.529 [2024-07-12 10:31:58.337727] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:04.529 [2024-07-12 10:31:58.337785] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:04.529 [2024-07-12 10:31:58.338011] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:04.529 [2024-07-12 10:31:58.338068] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:04.529 pt2 00:18:04.529 10:31:58 -- bdev/bdev_raid.sh@417 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:04.788 [2024-07-12 10:31:58.512918] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:04.788 10:31:58 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:18:04.788 10:31:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:04.788 10:31:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:04.788 10:31:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:04.788 10:31:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:04.788 10:31:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:04.788 10:31:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:04.788 10:31:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:04.788 10:31:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:04.788 10:31:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:04.788 10:31:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:04.788 10:31:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.046 10:31:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:05.046 "name": "raid_bdev1", 00:18:05.046 "uuid": "ba1a1351-f7c7-4b99-ace2-db80b9b0abc6", 00:18:05.046 "strip_size_kb": 0, 00:18:05.046 "state": "configuring", 00:18:05.046 "raid_level": "raid1", 00:18:05.046 "superblock": true, 00:18:05.046 "num_base_bdevs": 3, 00:18:05.046 "num_base_bdevs_discovered": 1, 00:18:05.046 "num_base_bdevs_operational": 3, 00:18:05.046 "base_bdevs_list": [ 00:18:05.046 { 00:18:05.046 "name": "pt1", 00:18:05.047 "uuid": "04fce7c6-c7ce-57a8-a0f0-50716869a212", 00:18:05.047 "is_configured": true, 00:18:05.047 "data_offset": 2048, 00:18:05.047 "data_size": 63488 00:18:05.047 }, 00:18:05.047 { 00:18:05.047 "name": null, 00:18:05.047 "uuid": "5b7bd6a7-39be-5825-b82e-0faea34cc077", 00:18:05.047 "is_configured": false, 00:18:05.047 "data_offset": 2048, 00:18:05.047 "data_size": 63488 00:18:05.047 }, 00:18:05.047 { 00:18:05.047 "name": null, 00:18:05.047 "uuid": "b811dbc3-0165-506b-b73c-059001a76990", 00:18:05.047 "is_configured": false, 00:18:05.047 "data_offset": 2048, 00:18:05.047 "data_size": 63488 00:18:05.047 } 00:18:05.047 ] 00:18:05.047 }' 00:18:05.047 10:31:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:05.047 10:31:58 -- common/autotest_common.sh@10 -- # set +x 00:18:05.613 10:31:59 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:18:05.614 10:31:59 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:05.614 10:31:59 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:05.614 [2024-07-12 10:31:59.453073] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:05.614 [2024-07-12 10:31:59.453298] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:05.614 [2024-07-12 10:31:59.453375] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:05.614 [2024-07-12 10:31:59.453498] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:05.614 [2024-07-12 10:31:59.453950] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:05.614 [2024-07-12 10:31:59.454013] vbdev_passthru.c: 
705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:05.614 [2024-07-12 10:31:59.454238] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:05.614 [2024-07-12 10:31:59.454295] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:05.614 pt2 00:18:05.614 10:31:59 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:05.614 10:31:59 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:05.614 10:31:59 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:05.871 [2024-07-12 10:31:59.641113] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:05.871 [2024-07-12 10:31:59.641287] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:05.871 [2024-07-12 10:31:59.641349] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:18:05.871 [2024-07-12 10:31:59.641456] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:05.871 [2024-07-12 10:31:59.641880] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:05.871 [2024-07-12 10:31:59.641943] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:05.871 [2024-07-12 10:31:59.642148] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:18:05.871 [2024-07-12 10:31:59.642204] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:05.871 [2024-07-12 10:31:59.642410] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:18:05.871 [2024-07-12 10:31:59.642584] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:05.871 [2024-07-12 10:31:59.642711] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:18:05.871 [2024-07-12 10:31:59.643083] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:18:05.871 [2024-07-12 10:31:59.643227] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:18:05.871 [2024-07-12 10:31:59.643460] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:05.871 pt3 00:18:05.871 10:31:59 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:05.871 10:31:59 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:05.871 10:31:59 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:05.871 10:31:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:05.871 10:31:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:05.871 10:31:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:05.871 10:31:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:05.871 10:31:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:05.871 10:31:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:05.871 10:31:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:05.871 10:31:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:05.871 10:31:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:05.871 10:31:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:05.871 10:31:59 -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.128 10:31:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:06.128 "name": "raid_bdev1", 00:18:06.128 "uuid": "ba1a1351-f7c7-4b99-ace2-db80b9b0abc6", 00:18:06.128 "strip_size_kb": 0, 00:18:06.128 "state": "online", 00:18:06.128 "raid_level": "raid1", 00:18:06.128 "superblock": true, 00:18:06.128 "num_base_bdevs": 3, 00:18:06.128 "num_base_bdevs_discovered": 3, 00:18:06.128 "num_base_bdevs_operational": 3, 00:18:06.128 "base_bdevs_list": [ 00:18:06.128 { 00:18:06.128 "name": "pt1", 00:18:06.128 "uuid": "04fce7c6-c7ce-57a8-a0f0-50716869a212", 00:18:06.128 "is_configured": true, 00:18:06.128 "data_offset": 2048, 00:18:06.128 "data_size": 63488 00:18:06.128 }, 00:18:06.128 { 00:18:06.128 "name": "pt2", 00:18:06.128 "uuid": "5b7bd6a7-39be-5825-b82e-0faea34cc077", 00:18:06.128 "is_configured": true, 00:18:06.128 "data_offset": 2048, 00:18:06.128 "data_size": 63488 00:18:06.128 }, 00:18:06.128 { 00:18:06.128 "name": "pt3", 00:18:06.128 "uuid": "b811dbc3-0165-506b-b73c-059001a76990", 00:18:06.128 "is_configured": true, 00:18:06.128 "data_offset": 2048, 00:18:06.128 "data_size": 63488 00:18:06.128 } 00:18:06.128 ] 00:18:06.128 }' 00:18:06.128 10:31:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:06.128 10:31:59 -- common/autotest_common.sh@10 -- # set +x 00:18:06.693 10:32:00 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:06.693 10:32:00 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:18:06.975 [2024-07-12 10:32:00.761492] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:06.975 10:32:00 -- bdev/bdev_raid.sh@430 -- # '[' ba1a1351-f7c7-4b99-ace2-db80b9b0abc6 '!=' ba1a1351-f7c7-4b99-ace2-db80b9b0abc6 ']' 00:18:06.975 10:32:00 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:18:06.975 10:32:00 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:06.975 10:32:00 -- bdev/bdev_raid.sh@196 -- # return 0 00:18:06.975 10:32:00 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:07.234 [2024-07-12 10:32:00.945365] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:07.234 10:32:00 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:07.234 10:32:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:07.234 10:32:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:07.234 10:32:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:07.234 10:32:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:07.234 10:32:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:07.234 10:32:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:07.234 10:32:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:07.234 10:32:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:07.234 10:32:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:07.234 10:32:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.234 10:32:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:07.234 10:32:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:07.234 "name": "raid_bdev1", 00:18:07.234 "uuid": "ba1a1351-f7c7-4b99-ace2-db80b9b0abc6", 00:18:07.234 "strip_size_kb": 0, 00:18:07.234 "state": "online", 
00:18:07.234 "raid_level": "raid1", 00:18:07.234 "superblock": true, 00:18:07.234 "num_base_bdevs": 3, 00:18:07.234 "num_base_bdevs_discovered": 2, 00:18:07.234 "num_base_bdevs_operational": 2, 00:18:07.234 "base_bdevs_list": [ 00:18:07.234 { 00:18:07.234 "name": null, 00:18:07.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.234 "is_configured": false, 00:18:07.234 "data_offset": 2048, 00:18:07.234 "data_size": 63488 00:18:07.234 }, 00:18:07.234 { 00:18:07.234 "name": "pt2", 00:18:07.234 "uuid": "5b7bd6a7-39be-5825-b82e-0faea34cc077", 00:18:07.234 "is_configured": true, 00:18:07.234 "data_offset": 2048, 00:18:07.234 "data_size": 63488 00:18:07.234 }, 00:18:07.234 { 00:18:07.234 "name": "pt3", 00:18:07.234 "uuid": "b811dbc3-0165-506b-b73c-059001a76990", 00:18:07.234 "is_configured": true, 00:18:07.234 "data_offset": 2048, 00:18:07.234 "data_size": 63488 00:18:07.234 } 00:18:07.234 ] 00:18:07.234 }' 00:18:07.234 10:32:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:07.234 10:32:01 -- common/autotest_common.sh@10 -- # set +x 00:18:08.167 10:32:01 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:08.167 [2024-07-12 10:32:01.981504] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:08.167 [2024-07-12 10:32:01.981664] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:08.167 [2024-07-12 10:32:01.981802] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:08.168 [2024-07-12 10:32:01.981947] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:08.168 [2024-07-12 10:32:01.982044] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:18:08.168 10:32:01 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:08.168 10:32:01 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:18:08.425 10:32:02 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:18:08.425 10:32:02 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:18:08.425 10:32:02 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:18:08.425 10:32:02 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:18:08.425 10:32:02 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:08.683 10:32:02 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:18:08.683 10:32:02 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:18:08.683 10:32:02 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:08.683 10:32:02 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:18:08.683 10:32:02 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:18:08.683 10:32:02 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:18:08.683 10:32:02 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:18:08.683 10:32:02 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:08.942 [2024-07-12 10:32:02.761620] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:08.942 [2024-07-12 10:32:02.761805] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:08.942 [2024-07-12 
10:32:02.761872] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:08.942 [2024-07-12 10:32:02.762003] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:08.942 [2024-07-12 10:32:02.763986] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:08.942 [2024-07-12 10:32:02.764167] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:08.942 [2024-07-12 10:32:02.764368] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:08.942 [2024-07-12 10:32:02.764521] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:08.942 pt2 00:18:08.942 10:32:02 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:08.942 10:32:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:08.942 10:32:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:08.942 10:32:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:08.942 10:32:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:08.942 10:32:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:08.942 10:32:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:08.942 10:32:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:08.942 10:32:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:08.942 10:32:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:08.942 10:32:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:08.942 10:32:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.201 10:32:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:09.201 "name": "raid_bdev1", 00:18:09.201 "uuid": "ba1a1351-f7c7-4b99-ace2-db80b9b0abc6", 00:18:09.201 "strip_size_kb": 0, 00:18:09.201 "state": "configuring", 00:18:09.201 "raid_level": "raid1", 00:18:09.201 "superblock": true, 00:18:09.201 "num_base_bdevs": 3, 00:18:09.201 "num_base_bdevs_discovered": 1, 00:18:09.201 "num_base_bdevs_operational": 2, 00:18:09.201 "base_bdevs_list": [ 00:18:09.201 { 00:18:09.201 "name": null, 00:18:09.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.201 "is_configured": false, 00:18:09.201 "data_offset": 2048, 00:18:09.201 "data_size": 63488 00:18:09.201 }, 00:18:09.201 { 00:18:09.201 "name": "pt2", 00:18:09.201 "uuid": "5b7bd6a7-39be-5825-b82e-0faea34cc077", 00:18:09.201 "is_configured": true, 00:18:09.201 "data_offset": 2048, 00:18:09.201 "data_size": 63488 00:18:09.201 }, 00:18:09.201 { 00:18:09.201 "name": null, 00:18:09.201 "uuid": "b811dbc3-0165-506b-b73c-059001a76990", 00:18:09.201 "is_configured": false, 00:18:09.201 "data_offset": 2048, 00:18:09.201 "data_size": 63488 00:18:09.201 } 00:18:09.201 ] 00:18:09.201 }' 00:18:09.201 10:32:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:09.201 10:32:02 -- common/autotest_common.sh@10 -- # set +x 00:18:09.766 10:32:03 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:18:09.766 10:32:03 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:18:09.766 10:32:03 -- bdev/bdev_raid.sh@462 -- # i=2 00:18:09.766 10:32:03 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:10.025 [2024-07-12 10:32:03.857817] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:10.025 [2024-07-12 10:32:03.857995] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:10.025 [2024-07-12 10:32:03.858061] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:10.025 [2024-07-12 10:32:03.858173] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:10.025 [2024-07-12 10:32:03.858641] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:10.025 [2024-07-12 10:32:03.858772] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:10.025 [2024-07-12 10:32:03.858897] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:18:10.025 [2024-07-12 10:32:03.858949] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:10.025 [2024-07-12 10:32:03.859084] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ae80 00:18:10.025 [2024-07-12 10:32:03.859119] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:10.025 [2024-07-12 10:32:03.859233] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:10.025 [2024-07-12 10:32:03.859768] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ae80 00:18:10.025 [2024-07-12 10:32:03.859893] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ae80 00:18:10.025 [2024-07-12 10:32:03.860117] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:10.025 pt3 00:18:10.025 10:32:03 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:10.025 10:32:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:10.025 10:32:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:10.025 10:32:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:10.025 10:32:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:10.025 10:32:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:10.025 10:32:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:10.025 10:32:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:10.025 10:32:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:10.025 10:32:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:10.025 10:32:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:10.025 10:32:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.283 10:32:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:10.283 "name": "raid_bdev1", 00:18:10.283 "uuid": "ba1a1351-f7c7-4b99-ace2-db80b9b0abc6", 00:18:10.283 "strip_size_kb": 0, 00:18:10.283 "state": "online", 00:18:10.283 "raid_level": "raid1", 00:18:10.283 "superblock": true, 00:18:10.283 "num_base_bdevs": 3, 00:18:10.283 "num_base_bdevs_discovered": 2, 00:18:10.283 "num_base_bdevs_operational": 2, 00:18:10.283 "base_bdevs_list": [ 00:18:10.283 { 00:18:10.283 "name": null, 00:18:10.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.283 "is_configured": false, 00:18:10.283 "data_offset": 2048, 00:18:10.283 "data_size": 63488 00:18:10.283 }, 00:18:10.283 { 00:18:10.283 "name": "pt2", 00:18:10.283 "uuid": "5b7bd6a7-39be-5825-b82e-0faea34cc077", 00:18:10.283 
"is_configured": true, 00:18:10.283 "data_offset": 2048, 00:18:10.283 "data_size": 63488 00:18:10.283 }, 00:18:10.283 { 00:18:10.283 "name": "pt3", 00:18:10.283 "uuid": "b811dbc3-0165-506b-b73c-059001a76990", 00:18:10.283 "is_configured": true, 00:18:10.283 "data_offset": 2048, 00:18:10.283 "data_size": 63488 00:18:10.283 } 00:18:10.283 ] 00:18:10.283 }' 00:18:10.283 10:32:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:10.283 10:32:04 -- common/autotest_common.sh@10 -- # set +x 00:18:10.850 10:32:04 -- bdev/bdev_raid.sh@468 -- # '[' 3 -gt 2 ']' 00:18:10.850 10:32:04 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:11.108 [2024-07-12 10:32:04.841945] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:11.108 [2024-07-12 10:32:04.842092] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:11.108 [2024-07-12 10:32:04.842226] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:11.108 [2024-07-12 10:32:04.842307] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:11.109 [2024-07-12 10:32:04.842536] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ae80 name raid_bdev1, state offline 00:18:11.109 10:32:04 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:18:11.109 10:32:04 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:11.368 10:32:05 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:18:11.368 10:32:05 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:18:11.368 10:32:05 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:11.626 [2024-07-12 10:32:05.326017] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:11.626 [2024-07-12 10:32:05.326215] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:11.626 [2024-07-12 10:32:05.326281] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:18:11.627 [2024-07-12 10:32:05.326397] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:11.627 [2024-07-12 10:32:05.328705] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:11.627 [2024-07-12 10:32:05.328892] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:11.627 [2024-07-12 10:32:05.329116] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:18:11.627 [2024-07-12 10:32:05.329267] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:11.627 pt1 00:18:11.627 10:32:05 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:18:11.627 10:32:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:11.627 10:32:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:11.627 10:32:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:11.627 10:32:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:11.627 10:32:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:11.627 10:32:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:11.627 10:32:05 -- bdev/bdev_raid.sh@123 -- # 
local num_base_bdevs 00:18:11.627 10:32:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:11.627 10:32:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:11.627 10:32:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:11.627 10:32:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.627 10:32:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:11.627 "name": "raid_bdev1", 00:18:11.627 "uuid": "ba1a1351-f7c7-4b99-ace2-db80b9b0abc6", 00:18:11.627 "strip_size_kb": 0, 00:18:11.627 "state": "configuring", 00:18:11.627 "raid_level": "raid1", 00:18:11.627 "superblock": true, 00:18:11.627 "num_base_bdevs": 3, 00:18:11.627 "num_base_bdevs_discovered": 1, 00:18:11.627 "num_base_bdevs_operational": 3, 00:18:11.627 "base_bdevs_list": [ 00:18:11.627 { 00:18:11.627 "name": "pt1", 00:18:11.627 "uuid": "04fce7c6-c7ce-57a8-a0f0-50716869a212", 00:18:11.627 "is_configured": true, 00:18:11.627 "data_offset": 2048, 00:18:11.627 "data_size": 63488 00:18:11.627 }, 00:18:11.627 { 00:18:11.627 "name": null, 00:18:11.627 "uuid": "5b7bd6a7-39be-5825-b82e-0faea34cc077", 00:18:11.627 "is_configured": false, 00:18:11.627 "data_offset": 2048, 00:18:11.627 "data_size": 63488 00:18:11.627 }, 00:18:11.627 { 00:18:11.627 "name": null, 00:18:11.627 "uuid": "b811dbc3-0165-506b-b73c-059001a76990", 00:18:11.627 "is_configured": false, 00:18:11.627 "data_offset": 2048, 00:18:11.627 "data_size": 63488 00:18:11.627 } 00:18:11.627 ] 00:18:11.627 }' 00:18:11.627 10:32:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:11.627 10:32:05 -- common/autotest_common.sh@10 -- # set +x 00:18:12.563 10:32:06 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:18:12.563 10:32:06 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:18:12.563 10:32:06 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:12.563 10:32:06 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:18:12.563 10:32:06 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:18:12.563 10:32:06 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:12.821 10:32:06 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:18:12.821 10:32:06 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:18:12.821 10:32:06 -- bdev/bdev_raid.sh@489 -- # i=2 00:18:12.821 10:32:06 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:13.079 [2024-07-12 10:32:06.778290] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:13.079 [2024-07-12 10:32:06.778474] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:13.079 [2024-07-12 10:32:06.778532] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:18:13.079 [2024-07-12 10:32:06.778645] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:13.079 [2024-07-12 10:32:06.779113] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:13.079 [2024-07-12 10:32:06.779174] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:13.079 [2024-07-12 10:32:06.779386] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:18:13.079 
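The reassembly above is driven entirely over the RPC socket: re-registering a passthru bdev on top of a malloc disk that still carries a raid superblock fires the examine callbacks, which read the superblock and hand the bdev back to raid_bdev1. A minimal sketch of that single step, assuming bdev_svc is still listening on /var/tmp/spdk-raid.sock and malloc3 retains the superblock written earlier (the $rpc shorthand is ours, not the test's):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # Re-wrap the malloc disk; registering pt3 triggers examine, which finds
  # the on-disk raid superblock and claims pt3 for raid_bdev1.
  $rpc bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
  # The raid should now list pt3 among its discovered base bdevs.
  $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'
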
[2024-07-12 10:32:06.779531] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt3 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:13.079 [2024-07-12 10:32:06.779616] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:13.079 [2024-07-12 10:32:06.779670] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ba80 name raid_bdev1, state configuring 00:18:13.079 [2024-07-12 10:32:06.779816] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:13.079 pt3 00:18:13.080 10:32:06 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:13.080 10:32:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:13.080 10:32:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:13.080 10:32:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:13.080 10:32:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:13.080 10:32:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:13.080 10:32:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:13.080 10:32:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:13.080 10:32:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:13.080 10:32:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:13.080 10:32:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:13.080 10:32:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.080 10:32:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:13.080 "name": "raid_bdev1", 00:18:13.080 "uuid": "ba1a1351-f7c7-4b99-ace2-db80b9b0abc6", 00:18:13.080 "strip_size_kb": 0, 00:18:13.080 "state": "configuring", 00:18:13.080 "raid_level": "raid1", 00:18:13.080 "superblock": true, 00:18:13.080 "num_base_bdevs": 3, 00:18:13.080 "num_base_bdevs_discovered": 1, 00:18:13.080 "num_base_bdevs_operational": 2, 00:18:13.080 "base_bdevs_list": [ 00:18:13.080 { 00:18:13.080 "name": null, 00:18:13.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.080 "is_configured": false, 00:18:13.080 "data_offset": 2048, 00:18:13.080 "data_size": 63488 00:18:13.080 }, 00:18:13.080 { 00:18:13.080 "name": null, 00:18:13.080 "uuid": "5b7bd6a7-39be-5825-b82e-0faea34cc077", 00:18:13.080 "is_configured": false, 00:18:13.080 "data_offset": 2048, 00:18:13.080 "data_size": 63488 00:18:13.080 }, 00:18:13.080 { 00:18:13.080 "name": "pt3", 00:18:13.080 "uuid": "b811dbc3-0165-506b-b73c-059001a76990", 00:18:13.080 "is_configured": true, 00:18:13.080 "data_offset": 2048, 00:18:13.080 "data_size": 63488 00:18:13.080 } 00:18:13.080 ] 00:18:13.080 }' 00:18:13.080 10:32:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:13.080 10:32:06 -- common/autotest_common.sh@10 -- # set +x 00:18:14.014 10:32:07 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:18:14.014 10:32:07 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:18:14.014 10:32:07 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:14.015 [2024-07-12 10:32:07.810485] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:14.015 [2024-07-12 10:32:07.810682] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:14.015 [2024-07-12 10:32:07.810748] vbdev_passthru.c: 
676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:18:14.015 [2024-07-12 10:32:07.810871] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:14.015 [2024-07-12 10:32:07.811306] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:14.015 [2024-07-12 10:32:07.811390] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:14.015 [2024-07-12 10:32:07.811622] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:14.015 [2024-07-12 10:32:07.811778] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:14.015 [2024-07-12 10:32:07.812039] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c080 00:18:14.015 [2024-07-12 10:32:07.812160] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:14.015 [2024-07-12 10:32:07.812310] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:18:14.015 [2024-07-12 10:32:07.812766] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c080 00:18:14.015 [2024-07-12 10:32:07.812813] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c080 00:18:14.015 [2024-07-12 10:32:07.812984] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:14.015 pt2 00:18:14.015 10:32:07 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:18:14.015 10:32:07 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:18:14.015 10:32:07 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:14.015 10:32:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:14.015 10:32:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:14.015 10:32:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:14.015 10:32:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:14.015 10:32:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:14.015 10:32:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:14.015 10:32:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:14.015 10:32:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:14.015 10:32:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:14.015 10:32:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:14.015 10:32:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.273 10:32:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:14.273 "name": "raid_bdev1", 00:18:14.273 "uuid": "ba1a1351-f7c7-4b99-ace2-db80b9b0abc6", 00:18:14.273 "strip_size_kb": 0, 00:18:14.273 "state": "online", 00:18:14.273 "raid_level": "raid1", 00:18:14.273 "superblock": true, 00:18:14.273 "num_base_bdevs": 3, 00:18:14.273 "num_base_bdevs_discovered": 2, 00:18:14.273 "num_base_bdevs_operational": 2, 00:18:14.273 "base_bdevs_list": [ 00:18:14.273 { 00:18:14.273 "name": null, 00:18:14.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.273 "is_configured": false, 00:18:14.273 "data_offset": 2048, 00:18:14.273 "data_size": 63488 00:18:14.273 }, 00:18:14.273 { 00:18:14.274 "name": "pt2", 00:18:14.274 "uuid": "5b7bd6a7-39be-5825-b82e-0faea34cc077", 00:18:14.274 "is_configured": true, 00:18:14.274 "data_offset": 2048, 00:18:14.274 "data_size": 63488 00:18:14.274 
}, 00:18:14.274 { 00:18:14.274 "name": "pt3", 00:18:14.274 "uuid": "b811dbc3-0165-506b-b73c-059001a76990", 00:18:14.274 "is_configured": true, 00:18:14.274 "data_offset": 2048, 00:18:14.274 "data_size": 63488 00:18:14.274 } 00:18:14.274 ] 00:18:14.274 }' 00:18:14.274 10:32:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:14.274 10:32:08 -- common/autotest_common.sh@10 -- # set +x 00:18:14.840 10:32:08 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:14.840 10:32:08 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:18:15.099 [2024-07-12 10:32:08.778790] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:15.099 10:32:08 -- bdev/bdev_raid.sh@506 -- # '[' ba1a1351-f7c7-4b99-ace2-db80b9b0abc6 '!=' ba1a1351-f7c7-4b99-ace2-db80b9b0abc6 ']' 00:18:15.099 10:32:08 -- bdev/bdev_raid.sh@511 -- # killprocess 120937 00:18:15.099 10:32:08 -- common/autotest_common.sh@926 -- # '[' -z 120937 ']' 00:18:15.099 10:32:08 -- common/autotest_common.sh@930 -- # kill -0 120937 00:18:15.099 10:32:08 -- common/autotest_common.sh@931 -- # uname 00:18:15.099 10:32:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:15.099 10:32:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 120937 00:18:15.099 killing process with pid 120937 00:18:15.099 10:32:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:15.099 10:32:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:15.099 10:32:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 120937' 00:18:15.099 10:32:08 -- common/autotest_common.sh@945 -- # kill 120937 00:18:15.099 10:32:08 -- common/autotest_common.sh@950 -- # wait 120937 00:18:15.099 [2024-07-12 10:32:08.811595] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:15.099 [2024-07-12 10:32:08.811653] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:15.099 [2024-07-12 10:32:08.811749] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:15.099 [2024-07-12 10:32:08.811762] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c080 name raid_bdev1, state offline 00:18:15.099 [2024-07-12 10:32:09.000475] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:16.035 ************************************ 00:18:16.035 END TEST raid_superblock_test 00:18:16.035 ************************************ 00:18:16.035 10:32:09 -- bdev/bdev_raid.sh@513 -- # return 0 00:18:16.035 00:18:16.035 real 0m17.840s 00:18:16.035 user 0m32.895s 00:18:16.035 sys 0m1.999s 00:18:16.035 10:32:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:16.035 10:32:09 -- common/autotest_common.sh@10 -- # set +x 00:18:16.294 10:32:09 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:18:16.294 10:32:09 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:18:16.294 10:32:09 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:18:16.294 10:32:09 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:18:16.294 10:32:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:16.294 10:32:09 -- common/autotest_common.sh@10 -- # set +x 00:18:16.294 ************************************ 00:18:16.294 START TEST raid_state_function_test 00:18:16.294 ************************************ 00:18:16.294 10:32:09 -- 
common/autotest_common.sh@1104 -- # raid_state_function_test raid0 4 false 00:18:16.294 10:32:09 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:18:16.294 10:32:09 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:18:16.294 10:32:09 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:18:16.294 10:32:09 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:18:16.294 10:32:09 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:18:16.294 10:32:09 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:18:16.294 10:32:09 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:16.294 10:32:09 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:18:16.294 10:32:09 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:16.294 10:32:09 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:16.294 10:32:09 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:18:16.294 10:32:09 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:16.294 10:32:09 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:16.294 10:32:09 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:18:16.294 10:32:09 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:16.294 10:32:09 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:16.294 10:32:09 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:18:16.294 10:32:09 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:16.294 10:32:09 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:16.294 10:32:09 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:18:16.294 10:32:09 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:18:16.294 10:32:09 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:18:16.294 10:32:09 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:18:16.294 10:32:09 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:18:16.294 10:32:09 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:18:16.294 10:32:09 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:18:16.294 10:32:09 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:18:16.294 10:32:09 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:18:16.294 10:32:09 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:18:16.294 10:32:09 -- bdev/bdev_raid.sh@226 -- # raid_pid=121566 00:18:16.294 10:32:09 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:16.294 10:32:09 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 121566' 00:18:16.294 Process raid pid: 121566 00:18:16.294 10:32:09 -- bdev/bdev_raid.sh@228 -- # waitforlisten 121566 /var/tmp/spdk-raid.sock 00:18:16.294 10:32:09 -- common/autotest_common.sh@819 -- # '[' -z 121566 ']' 00:18:16.294 10:32:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:16.294 10:32:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:16.294 10:32:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:16.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:16.294 10:32:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:16.294 10:32:09 -- common/autotest_common.sh@10 -- # set +x 00:18:16.294 [2024-07-12 10:32:10.051944] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
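The block above is harness prologue rather than the test proper: raid_state_function_test builds its BaseBdev1..BaseBdev4 name list, picks -z 64 for raid0, and launches a dedicated bdev_svc app on its own RPC socket before driving any state transitions. A condensed sketch of that startup, using only paths and arguments visible in this log; the readiness loop is our stand-in (an assumption) for the waitforlisten helper:

  svc=/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  $svc -r $sock -i 0 -L bdev_raid &
  raid_pid=$!
  # Poll until the RPC socket answers (waitforlisten does this more carefully).
  until $rpc -s $sock rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done
  # None of the base bdevs exist yet, so the raid is created in "configuring".
  $rpc -s $sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
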
00:18:16.294 [2024-07-12 10:32:10.052374] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:16.553 [2024-07-12 10:32:10.224039] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.553 [2024-07-12 10:32:10.417129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:16.811 [2024-07-12 10:32:10.584850] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:17.195 10:32:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:17.195 10:32:10 -- common/autotest_common.sh@852 -- # return 0 00:18:17.195 10:32:10 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:17.195 [2024-07-12 10:32:11.083198] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:17.195 [2024-07-12 10:32:11.083425] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:17.195 [2024-07-12 10:32:11.083542] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:17.195 [2024-07-12 10:32:11.083601] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:17.195 [2024-07-12 10:32:11.083685] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:17.195 [2024-07-12 10:32:11.083756] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:17.195 [2024-07-12 10:32:11.083786] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:17.195 [2024-07-12 10:32:11.083827] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:17.468 10:32:11 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:17.468 10:32:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:17.468 10:32:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:17.468 10:32:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:17.468 10:32:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:17.468 10:32:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:17.468 10:32:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:17.468 10:32:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:17.468 10:32:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:17.468 10:32:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:17.468 10:32:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:17.468 10:32:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:17.468 10:32:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:17.468 "name": "Existed_Raid", 00:18:17.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.468 "strip_size_kb": 64, 00:18:17.468 "state": "configuring", 00:18:17.468 "raid_level": "raid0", 00:18:17.468 "superblock": false, 00:18:17.468 "num_base_bdevs": 4, 00:18:17.468 "num_base_bdevs_discovered": 0, 00:18:17.468 "num_base_bdevs_operational": 4, 00:18:17.468 "base_bdevs_list": [ 00:18:17.468 { 00:18:17.468 
"name": "BaseBdev1", 00:18:17.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.468 "is_configured": false, 00:18:17.468 "data_offset": 0, 00:18:17.468 "data_size": 0 00:18:17.468 }, 00:18:17.468 { 00:18:17.468 "name": "BaseBdev2", 00:18:17.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.468 "is_configured": false, 00:18:17.468 "data_offset": 0, 00:18:17.468 "data_size": 0 00:18:17.468 }, 00:18:17.468 { 00:18:17.468 "name": "BaseBdev3", 00:18:17.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.468 "is_configured": false, 00:18:17.468 "data_offset": 0, 00:18:17.468 "data_size": 0 00:18:17.468 }, 00:18:17.468 { 00:18:17.468 "name": "BaseBdev4", 00:18:17.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.468 "is_configured": false, 00:18:17.468 "data_offset": 0, 00:18:17.468 "data_size": 0 00:18:17.468 } 00:18:17.468 ] 00:18:17.468 }' 00:18:17.468 10:32:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:17.468 10:32:11 -- common/autotest_common.sh@10 -- # set +x 00:18:18.403 10:32:11 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:18.403 [2024-07-12 10:32:12.139233] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:18.403 [2024-07-12 10:32:12.139412] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:18:18.403 10:32:12 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:18.403 [2024-07-12 10:32:12.319299] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:18.403 [2024-07-12 10:32:12.319489] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:18.403 [2024-07-12 10:32:12.319585] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:18.403 [2024-07-12 10:32:12.319649] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:18.403 [2024-07-12 10:32:12.319745] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:18.403 [2024-07-12 10:32:12.319828] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:18.403 [2024-07-12 10:32:12.319856] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:18.663 [2024-07-12 10:32:12.319965] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:18.663 10:32:12 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:18.663 [2024-07-12 10:32:12.532907] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:18.663 BaseBdev1 00:18:18.663 10:32:12 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:18:18.663 10:32:12 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:18:18.663 10:32:12 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:18.663 10:32:12 -- common/autotest_common.sh@889 -- # local i 00:18:18.663 10:32:12 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:18.663 10:32:12 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:18.663 10:32:12 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:18.921 10:32:12 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:19.179 [ 00:18:19.179 { 00:18:19.179 "name": "BaseBdev1", 00:18:19.179 "aliases": [ 00:18:19.179 "7e716263-d98c-44f0-a5c7-64df3b183b30" 00:18:19.179 ], 00:18:19.179 "product_name": "Malloc disk", 00:18:19.179 "block_size": 512, 00:18:19.179 "num_blocks": 65536, 00:18:19.179 "uuid": "7e716263-d98c-44f0-a5c7-64df3b183b30", 00:18:19.179 "assigned_rate_limits": { 00:18:19.179 "rw_ios_per_sec": 0, 00:18:19.179 "rw_mbytes_per_sec": 0, 00:18:19.179 "r_mbytes_per_sec": 0, 00:18:19.179 "w_mbytes_per_sec": 0 00:18:19.179 }, 00:18:19.179 "claimed": true, 00:18:19.179 "claim_type": "exclusive_write", 00:18:19.179 "zoned": false, 00:18:19.179 "supported_io_types": { 00:18:19.179 "read": true, 00:18:19.179 "write": true, 00:18:19.179 "unmap": true, 00:18:19.179 "write_zeroes": true, 00:18:19.179 "flush": true, 00:18:19.179 "reset": true, 00:18:19.179 "compare": false, 00:18:19.179 "compare_and_write": false, 00:18:19.179 "abort": true, 00:18:19.179 "nvme_admin": false, 00:18:19.179 "nvme_io": false 00:18:19.179 }, 00:18:19.179 "memory_domains": [ 00:18:19.179 { 00:18:19.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:19.179 "dma_device_type": 2 00:18:19.179 } 00:18:19.179 ], 00:18:19.179 "driver_specific": {} 00:18:19.179 } 00:18:19.179 ] 00:18:19.179 10:32:12 -- common/autotest_common.sh@895 -- # return 0 00:18:19.179 10:32:12 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:19.179 10:32:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:19.179 10:32:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:19.179 10:32:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:19.179 10:32:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:19.179 10:32:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:19.179 10:32:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:19.179 10:32:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:19.179 10:32:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:19.179 10:32:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:19.179 10:32:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:19.179 10:32:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:19.436 10:32:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:19.436 "name": "Existed_Raid", 00:18:19.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.436 "strip_size_kb": 64, 00:18:19.436 "state": "configuring", 00:18:19.436 "raid_level": "raid0", 00:18:19.436 "superblock": false, 00:18:19.436 "num_base_bdevs": 4, 00:18:19.436 "num_base_bdevs_discovered": 1, 00:18:19.436 "num_base_bdevs_operational": 4, 00:18:19.436 "base_bdevs_list": [ 00:18:19.436 { 00:18:19.436 "name": "BaseBdev1", 00:18:19.436 "uuid": "7e716263-d98c-44f0-a5c7-64df3b183b30", 00:18:19.436 "is_configured": true, 00:18:19.436 "data_offset": 0, 00:18:19.436 "data_size": 65536 00:18:19.436 }, 00:18:19.436 { 00:18:19.436 "name": "BaseBdev2", 00:18:19.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.436 "is_configured": false, 00:18:19.436 "data_offset": 0, 00:18:19.436 "data_size": 0 00:18:19.436 }, 
00:18:19.436 { 00:18:19.436 "name": "BaseBdev3", 00:18:19.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.436 "is_configured": false, 00:18:19.436 "data_offset": 0, 00:18:19.436 "data_size": 0 00:18:19.436 }, 00:18:19.436 { 00:18:19.436 "name": "BaseBdev4", 00:18:19.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.436 "is_configured": false, 00:18:19.436 "data_offset": 0, 00:18:19.436 "data_size": 0 00:18:19.436 } 00:18:19.436 ] 00:18:19.436 }' 00:18:19.436 10:32:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:19.436 10:32:13 -- common/autotest_common.sh@10 -- # set +x 00:18:20.002 10:32:13 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:20.260 [2024-07-12 10:32:13.921161] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:20.260 [2024-07-12 10:32:13.921313] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:18:20.260 10:32:13 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:18:20.260 10:32:13 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:20.260 [2024-07-12 10:32:14.097239] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:20.260 [2024-07-12 10:32:14.099134] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:20.260 [2024-07-12 10:32:14.099332] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:20.260 [2024-07-12 10:32:14.099456] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:20.260 [2024-07-12 10:32:14.099515] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:20.260 [2024-07-12 10:32:14.099603] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:20.260 [2024-07-12 10:32:14.099656] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:20.260 10:32:14 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:18:20.260 10:32:14 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:20.260 10:32:14 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:20.260 10:32:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:20.260 10:32:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:20.260 10:32:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:20.260 10:32:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:20.260 10:32:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:20.260 10:32:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:20.261 10:32:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:20.261 10:32:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:20.261 10:32:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:20.261 10:32:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:20.261 10:32:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:20.519 10:32:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:20.519 "name": "Existed_Raid", 00:18:20.519 
"uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.519 "strip_size_kb": 64, 00:18:20.519 "state": "configuring", 00:18:20.519 "raid_level": "raid0", 00:18:20.519 "superblock": false, 00:18:20.519 "num_base_bdevs": 4, 00:18:20.519 "num_base_bdevs_discovered": 1, 00:18:20.519 "num_base_bdevs_operational": 4, 00:18:20.519 "base_bdevs_list": [ 00:18:20.519 { 00:18:20.519 "name": "BaseBdev1", 00:18:20.519 "uuid": "7e716263-d98c-44f0-a5c7-64df3b183b30", 00:18:20.519 "is_configured": true, 00:18:20.519 "data_offset": 0, 00:18:20.519 "data_size": 65536 00:18:20.519 }, 00:18:20.519 { 00:18:20.519 "name": "BaseBdev2", 00:18:20.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.519 "is_configured": false, 00:18:20.519 "data_offset": 0, 00:18:20.519 "data_size": 0 00:18:20.519 }, 00:18:20.519 { 00:18:20.519 "name": "BaseBdev3", 00:18:20.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.519 "is_configured": false, 00:18:20.519 "data_offset": 0, 00:18:20.519 "data_size": 0 00:18:20.519 }, 00:18:20.519 { 00:18:20.519 "name": "BaseBdev4", 00:18:20.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.519 "is_configured": false, 00:18:20.519 "data_offset": 0, 00:18:20.519 "data_size": 0 00:18:20.519 } 00:18:20.519 ] 00:18:20.519 }' 00:18:20.519 10:32:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:20.519 10:32:14 -- common/autotest_common.sh@10 -- # set +x 00:18:21.086 10:32:14 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:21.344 [2024-07-12 10:32:15.196461] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:21.344 BaseBdev2 00:18:21.344 10:32:15 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:18:21.344 10:32:15 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:18:21.344 10:32:15 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:21.344 10:32:15 -- common/autotest_common.sh@889 -- # local i 00:18:21.344 10:32:15 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:21.344 10:32:15 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:21.344 10:32:15 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:21.603 10:32:15 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:21.861 [ 00:18:21.861 { 00:18:21.861 "name": "BaseBdev2", 00:18:21.861 "aliases": [ 00:18:21.861 "99e46f2a-b207-4a7a-82bb-f3439051fd47" 00:18:21.861 ], 00:18:21.861 "product_name": "Malloc disk", 00:18:21.861 "block_size": 512, 00:18:21.861 "num_blocks": 65536, 00:18:21.861 "uuid": "99e46f2a-b207-4a7a-82bb-f3439051fd47", 00:18:21.861 "assigned_rate_limits": { 00:18:21.861 "rw_ios_per_sec": 0, 00:18:21.861 "rw_mbytes_per_sec": 0, 00:18:21.861 "r_mbytes_per_sec": 0, 00:18:21.861 "w_mbytes_per_sec": 0 00:18:21.861 }, 00:18:21.861 "claimed": true, 00:18:21.861 "claim_type": "exclusive_write", 00:18:21.861 "zoned": false, 00:18:21.861 "supported_io_types": { 00:18:21.861 "read": true, 00:18:21.861 "write": true, 00:18:21.861 "unmap": true, 00:18:21.861 "write_zeroes": true, 00:18:21.861 "flush": true, 00:18:21.861 "reset": true, 00:18:21.861 "compare": false, 00:18:21.861 "compare_and_write": false, 00:18:21.861 "abort": true, 00:18:21.861 "nvme_admin": false, 00:18:21.861 "nvme_io": false 00:18:21.861 }, 00:18:21.861 "memory_domains": [ 
00:18:21.861 { 00:18:21.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:21.861 "dma_device_type": 2 00:18:21.861 } 00:18:21.861 ], 00:18:21.861 "driver_specific": {} 00:18:21.861 } 00:18:21.861 ] 00:18:21.861 10:32:15 -- common/autotest_common.sh@895 -- # return 0 00:18:21.861 10:32:15 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:21.861 10:32:15 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:21.861 10:32:15 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:21.861 10:32:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:21.861 10:32:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:21.861 10:32:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:21.861 10:32:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:21.861 10:32:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:21.861 10:32:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:21.861 10:32:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:21.861 10:32:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:21.861 10:32:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:21.861 10:32:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:21.861 10:32:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:22.120 10:32:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:22.120 "name": "Existed_Raid", 00:18:22.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.120 "strip_size_kb": 64, 00:18:22.120 "state": "configuring", 00:18:22.120 "raid_level": "raid0", 00:18:22.120 "superblock": false, 00:18:22.120 "num_base_bdevs": 4, 00:18:22.120 "num_base_bdevs_discovered": 2, 00:18:22.120 "num_base_bdevs_operational": 4, 00:18:22.120 "base_bdevs_list": [ 00:18:22.120 { 00:18:22.120 "name": "BaseBdev1", 00:18:22.120 "uuid": "7e716263-d98c-44f0-a5c7-64df3b183b30", 00:18:22.120 "is_configured": true, 00:18:22.120 "data_offset": 0, 00:18:22.120 "data_size": 65536 00:18:22.120 }, 00:18:22.120 { 00:18:22.120 "name": "BaseBdev2", 00:18:22.120 "uuid": "99e46f2a-b207-4a7a-82bb-f3439051fd47", 00:18:22.120 "is_configured": true, 00:18:22.120 "data_offset": 0, 00:18:22.120 "data_size": 65536 00:18:22.120 }, 00:18:22.120 { 00:18:22.120 "name": "BaseBdev3", 00:18:22.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.120 "is_configured": false, 00:18:22.120 "data_offset": 0, 00:18:22.120 "data_size": 0 00:18:22.120 }, 00:18:22.120 { 00:18:22.120 "name": "BaseBdev4", 00:18:22.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.120 "is_configured": false, 00:18:22.120 "data_offset": 0, 00:18:22.120 "data_size": 0 00:18:22.120 } 00:18:22.120 ] 00:18:22.120 }' 00:18:22.120 10:32:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:22.120 10:32:15 -- common/autotest_common.sh@10 -- # set +x 00:18:22.686 10:32:16 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:22.944 [2024-07-12 10:32:16.800474] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:22.944 BaseBdev3 00:18:22.944 10:32:16 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:18:22.944 10:32:16 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:18:22.944 10:32:16 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:22.944 
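Every iteration of this loop repeats the same three-step pattern: create the malloc disk, wait for the bdev layer to examine and claim it, then confirm num_base_bdevs_discovered ticked up while the state stays "configuring". A sketch of one iteration with the log's geometry, 32 MiB at a 512-byte block size, which matches the num_blocks 65536 reported above (the counter n is a hypothetical stand-in for the test's loop variable):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  n=3
  $rpc bdev_malloc_create 32 512 -b BaseBdev$n   # 32 MiB / 512 B blocks = 65536 blocks
  $rpc bdev_wait_for_examine                     # let examine claim it for Existed_Raid
  $rpc bdev_get_bdevs -b BaseBdev$n -t 2000      # waitforbdev's readiness probe
  # Expect num_base_bdevs_discovered == n; state stays "configuring" until all 4 exist.
  $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'
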
10:32:16 -- common/autotest_common.sh@889 -- # local i 00:18:22.944 10:32:16 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:22.944 10:32:16 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:22.944 10:32:16 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:23.203 10:32:17 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:23.461 [ 00:18:23.461 { 00:18:23.461 "name": "BaseBdev3", 00:18:23.461 "aliases": [ 00:18:23.461 "d5235fec-d588-44c1-8cde-9c9a3165fc02" 00:18:23.461 ], 00:18:23.461 "product_name": "Malloc disk", 00:18:23.461 "block_size": 512, 00:18:23.461 "num_blocks": 65536, 00:18:23.461 "uuid": "d5235fec-d588-44c1-8cde-9c9a3165fc02", 00:18:23.461 "assigned_rate_limits": { 00:18:23.461 "rw_ios_per_sec": 0, 00:18:23.462 "rw_mbytes_per_sec": 0, 00:18:23.462 "r_mbytes_per_sec": 0, 00:18:23.462 "w_mbytes_per_sec": 0 00:18:23.462 }, 00:18:23.462 "claimed": true, 00:18:23.462 "claim_type": "exclusive_write", 00:18:23.462 "zoned": false, 00:18:23.462 "supported_io_types": { 00:18:23.462 "read": true, 00:18:23.462 "write": true, 00:18:23.462 "unmap": true, 00:18:23.462 "write_zeroes": true, 00:18:23.462 "flush": true, 00:18:23.462 "reset": true, 00:18:23.462 "compare": false, 00:18:23.462 "compare_and_write": false, 00:18:23.462 "abort": true, 00:18:23.462 "nvme_admin": false, 00:18:23.462 "nvme_io": false 00:18:23.462 }, 00:18:23.462 "memory_domains": [ 00:18:23.462 { 00:18:23.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:23.462 "dma_device_type": 2 00:18:23.462 } 00:18:23.462 ], 00:18:23.462 "driver_specific": {} 00:18:23.462 } 00:18:23.462 ] 00:18:23.462 10:32:17 -- common/autotest_common.sh@895 -- # return 0 00:18:23.462 10:32:17 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:23.462 10:32:17 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:23.462 10:32:17 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:23.462 10:32:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:23.462 10:32:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:23.462 10:32:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:23.462 10:32:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:23.462 10:32:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:23.462 10:32:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:23.462 10:32:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:23.462 10:32:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:23.462 10:32:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:23.462 10:32:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:23.462 10:32:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:23.720 10:32:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:23.720 "name": "Existed_Raid", 00:18:23.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.720 "strip_size_kb": 64, 00:18:23.720 "state": "configuring", 00:18:23.720 "raid_level": "raid0", 00:18:23.720 "superblock": false, 00:18:23.720 "num_base_bdevs": 4, 00:18:23.720 "num_base_bdevs_discovered": 3, 00:18:23.720 "num_base_bdevs_operational": 4, 00:18:23.720 "base_bdevs_list": [ 00:18:23.720 { 00:18:23.720 "name": 
"BaseBdev1", 00:18:23.720 "uuid": "7e716263-d98c-44f0-a5c7-64df3b183b30", 00:18:23.720 "is_configured": true, 00:18:23.720 "data_offset": 0, 00:18:23.720 "data_size": 65536 00:18:23.720 }, 00:18:23.720 { 00:18:23.720 "name": "BaseBdev2", 00:18:23.720 "uuid": "99e46f2a-b207-4a7a-82bb-f3439051fd47", 00:18:23.720 "is_configured": true, 00:18:23.720 "data_offset": 0, 00:18:23.720 "data_size": 65536 00:18:23.720 }, 00:18:23.720 { 00:18:23.720 "name": "BaseBdev3", 00:18:23.720 "uuid": "d5235fec-d588-44c1-8cde-9c9a3165fc02", 00:18:23.720 "is_configured": true, 00:18:23.720 "data_offset": 0, 00:18:23.720 "data_size": 65536 00:18:23.720 }, 00:18:23.720 { 00:18:23.720 "name": "BaseBdev4", 00:18:23.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.720 "is_configured": false, 00:18:23.720 "data_offset": 0, 00:18:23.720 "data_size": 0 00:18:23.720 } 00:18:23.720 ] 00:18:23.720 }' 00:18:23.720 10:32:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:23.720 10:32:17 -- common/autotest_common.sh@10 -- # set +x 00:18:24.654 10:32:18 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:18:24.654 [2024-07-12 10:32:18.476281] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:24.654 [2024-07-12 10:32:18.476446] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:18:24.654 [2024-07-12 10:32:18.476486] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:18:24.654 [2024-07-12 10:32:18.476731] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:18:24.654 [2024-07-12 10:32:18.477174] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:18:24.654 [2024-07-12 10:32:18.477304] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:18:24.654 [2024-07-12 10:32:18.477612] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:24.654 BaseBdev4 00:18:24.654 10:32:18 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:18:24.654 10:32:18 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:18:24.654 10:32:18 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:24.654 10:32:18 -- common/autotest_common.sh@889 -- # local i 00:18:24.654 10:32:18 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:24.654 10:32:18 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:24.654 10:32:18 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:24.912 10:32:18 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:25.170 [ 00:18:25.170 { 00:18:25.170 "name": "BaseBdev4", 00:18:25.170 "aliases": [ 00:18:25.170 "5712ae91-2916-4003-93f6-8cdc3ab18f56" 00:18:25.170 ], 00:18:25.170 "product_name": "Malloc disk", 00:18:25.170 "block_size": 512, 00:18:25.170 "num_blocks": 65536, 00:18:25.170 "uuid": "5712ae91-2916-4003-93f6-8cdc3ab18f56", 00:18:25.170 "assigned_rate_limits": { 00:18:25.170 "rw_ios_per_sec": 0, 00:18:25.170 "rw_mbytes_per_sec": 0, 00:18:25.170 "r_mbytes_per_sec": 0, 00:18:25.170 "w_mbytes_per_sec": 0 00:18:25.170 }, 00:18:25.170 "claimed": true, 00:18:25.170 "claim_type": "exclusive_write", 00:18:25.170 "zoned": false, 00:18:25.170 
"supported_io_types": { 00:18:25.170 "read": true, 00:18:25.170 "write": true, 00:18:25.170 "unmap": true, 00:18:25.170 "write_zeroes": true, 00:18:25.170 "flush": true, 00:18:25.170 "reset": true, 00:18:25.171 "compare": false, 00:18:25.171 "compare_and_write": false, 00:18:25.171 "abort": true, 00:18:25.171 "nvme_admin": false, 00:18:25.171 "nvme_io": false 00:18:25.171 }, 00:18:25.171 "memory_domains": [ 00:18:25.171 { 00:18:25.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:25.171 "dma_device_type": 2 00:18:25.171 } 00:18:25.171 ], 00:18:25.171 "driver_specific": {} 00:18:25.171 } 00:18:25.171 ] 00:18:25.171 10:32:18 -- common/autotest_common.sh@895 -- # return 0 00:18:25.171 10:32:18 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:25.171 10:32:18 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:25.171 10:32:18 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:18:25.171 10:32:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:25.171 10:32:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:25.171 10:32:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:25.171 10:32:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:25.171 10:32:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:25.171 10:32:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:25.171 10:32:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:25.171 10:32:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:25.171 10:32:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:25.171 10:32:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:25.171 10:32:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:25.429 10:32:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:25.429 "name": "Existed_Raid", 00:18:25.429 "uuid": "e4f40796-4daa-4dac-8dfb-ad4d2317a50a", 00:18:25.429 "strip_size_kb": 64, 00:18:25.429 "state": "online", 00:18:25.429 "raid_level": "raid0", 00:18:25.429 "superblock": false, 00:18:25.429 "num_base_bdevs": 4, 00:18:25.429 "num_base_bdevs_discovered": 4, 00:18:25.429 "num_base_bdevs_operational": 4, 00:18:25.429 "base_bdevs_list": [ 00:18:25.429 { 00:18:25.429 "name": "BaseBdev1", 00:18:25.429 "uuid": "7e716263-d98c-44f0-a5c7-64df3b183b30", 00:18:25.429 "is_configured": true, 00:18:25.429 "data_offset": 0, 00:18:25.429 "data_size": 65536 00:18:25.429 }, 00:18:25.429 { 00:18:25.429 "name": "BaseBdev2", 00:18:25.429 "uuid": "99e46f2a-b207-4a7a-82bb-f3439051fd47", 00:18:25.429 "is_configured": true, 00:18:25.429 "data_offset": 0, 00:18:25.429 "data_size": 65536 00:18:25.429 }, 00:18:25.429 { 00:18:25.429 "name": "BaseBdev3", 00:18:25.429 "uuid": "d5235fec-d588-44c1-8cde-9c9a3165fc02", 00:18:25.429 "is_configured": true, 00:18:25.429 "data_offset": 0, 00:18:25.429 "data_size": 65536 00:18:25.429 }, 00:18:25.429 { 00:18:25.429 "name": "BaseBdev4", 00:18:25.429 "uuid": "5712ae91-2916-4003-93f6-8cdc3ab18f56", 00:18:25.429 "is_configured": true, 00:18:25.429 "data_offset": 0, 00:18:25.429 "data_size": 65536 00:18:25.429 } 00:18:25.429 ] 00:18:25.429 }' 00:18:25.429 10:32:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:25.429 10:32:19 -- common/autotest_common.sh@10 -- # set +x 00:18:25.994 10:32:19 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:26.252 
[2024-07-12 10:32:20.036650] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:26.252 [2024-07-12 10:32:20.036801] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:26.252 [2024-07-12 10:32:20.036964] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:26.252 10:32:20 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:18:26.252 10:32:20 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:18:26.252 10:32:20 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:26.252 10:32:20 -- bdev/bdev_raid.sh@197 -- # return 1 00:18:26.252 10:32:20 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:18:26.252 10:32:20 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:18:26.252 10:32:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:26.252 10:32:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:18:26.252 10:32:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:26.252 10:32:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:26.252 10:32:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:26.252 10:32:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:26.252 10:32:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:26.252 10:32:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:26.252 10:32:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:26.252 10:32:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:26.252 10:32:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:26.509 10:32:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:26.509 "name": "Existed_Raid", 00:18:26.509 "uuid": "e4f40796-4daa-4dac-8dfb-ad4d2317a50a", 00:18:26.509 "strip_size_kb": 64, 00:18:26.509 "state": "offline", 00:18:26.509 "raid_level": "raid0", 00:18:26.509 "superblock": false, 00:18:26.509 "num_base_bdevs": 4, 00:18:26.509 "num_base_bdevs_discovered": 3, 00:18:26.509 "num_base_bdevs_operational": 3, 00:18:26.509 "base_bdevs_list": [ 00:18:26.509 { 00:18:26.509 "name": null, 00:18:26.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.509 "is_configured": false, 00:18:26.509 "data_offset": 0, 00:18:26.509 "data_size": 65536 00:18:26.509 }, 00:18:26.509 { 00:18:26.509 "name": "BaseBdev2", 00:18:26.509 "uuid": "99e46f2a-b207-4a7a-82bb-f3439051fd47", 00:18:26.509 "is_configured": true, 00:18:26.509 "data_offset": 0, 00:18:26.509 "data_size": 65536 00:18:26.509 }, 00:18:26.509 { 00:18:26.509 "name": "BaseBdev3", 00:18:26.509 "uuid": "d5235fec-d588-44c1-8cde-9c9a3165fc02", 00:18:26.509 "is_configured": true, 00:18:26.509 "data_offset": 0, 00:18:26.509 "data_size": 65536 00:18:26.509 }, 00:18:26.509 { 00:18:26.509 "name": "BaseBdev4", 00:18:26.509 "uuid": "5712ae91-2916-4003-93f6-8cdc3ab18f56", 00:18:26.509 "is_configured": true, 00:18:26.509 "data_offset": 0, 00:18:26.509 "data_size": 65536 00:18:26.509 } 00:18:26.509 ] 00:18:26.509 }' 00:18:26.509 10:32:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:26.509 10:32:20 -- common/autotest_common.sh@10 -- # set +x 00:18:27.442 10:32:20 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:18:27.442 10:32:20 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:27.442 10:32:20 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:27.442 
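This branch is the central assertion of the test: has_redundancy returns 1 for raid0, so losing any single base bdev must take the whole array from online straight to offline, with no degraded-online middle state (a raid1 array, by contrast, would stay online). A sketch of the same check over RPC, reusing the log's names; the explicit comparison is our paraphrase (an assumption) of what verify_raid_bdev_state does with jq:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $rpc bdev_malloc_delete BaseBdev2   # raid0 stripes with no parity: any lost member is fatal
  state=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state')
  [ "$state" = offline ] || echo "expected offline, got $state" >&2
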
10:32:20 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:27.442 10:32:21 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:27.442 10:32:21 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:27.442 10:32:21 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:27.700 [2024-07-12 10:32:21.400202] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:27.700 10:32:21 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:27.700 10:32:21 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:27.700 10:32:21 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:27.700 10:32:21 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:27.958 10:32:21 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:27.958 10:32:21 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:27.958 10:32:21 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:28.216 [2024-07-12 10:32:21.912059] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:28.216 10:32:21 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:28.216 10:32:21 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:28.216 10:32:21 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:28.216 10:32:21 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:28.475 10:32:22 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:28.475 10:32:22 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:28.475 10:32:22 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:18:28.475 [2024-07-12 10:32:22.387430] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:28.475 [2024-07-12 10:32:22.387624] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:18:28.734 10:32:22 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:28.734 10:32:22 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:28.734 10:32:22 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:28.734 10:32:22 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:18:28.993 10:32:22 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:18:28.993 10:32:22 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:18:28.993 10:32:22 -- bdev/bdev_raid.sh@287 -- # killprocess 121566 00:18:28.993 10:32:22 -- common/autotest_common.sh@926 -- # '[' -z 121566 ']' 00:18:28.993 10:32:22 -- common/autotest_common.sh@930 -- # kill -0 121566 00:18:28.993 10:32:22 -- common/autotest_common.sh@931 -- # uname 00:18:28.993 10:32:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:28.993 10:32:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 121566 00:18:28.993 killing process with pid 121566 00:18:28.993 10:32:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:28.993 10:32:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:28.993 10:32:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 121566' 00:18:28.993 10:32:22 -- common/autotest_common.sh@945 -- # kill 121566 
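
The removal sequence traced above repeats one pattern per base bdev: query the raid bdev list, confirm Existed_Raid is still registered, then delete the next malloc bdev. A minimal sketch of that loop, using the same rpc.py path and socket as this run (variable names mirror the trace; error handling is simplified):

    # Sketch: delete the remaining base bdevs one at a time; raid0 has no
    # redundancy, so Existed_Raid goes offline but stays registered until
    # the last base bdev is gone.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for i in 2 3 4; do
        raid_bdev=$($rpc bdev_raid_get_bdevs all | jq -r '.[0]["name"]')
        [ "$raid_bdev" = "Existed_Raid" ] || exit 1
        $rpc bdev_malloc_delete "BaseBdev$i"
    done
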
00:18:28.993 10:32:22 -- common/autotest_common.sh@950 -- # wait 121566 00:18:28.993 [2024-07-12 10:32:22.710841] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:28.993 [2024-07-12 10:32:22.710945] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:29.927 ************************************ 00:18:29.927 END TEST raid_state_function_test 00:18:29.927 ************************************ 00:18:29.927 10:32:23 -- bdev/bdev_raid.sh@289 -- # return 0 00:18:29.927 00:18:29.927 real 0m13.649s 00:18:29.927 user 0m24.607s 00:18:29.927 sys 0m1.498s 00:18:29.927 10:32:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:29.927 10:32:23 -- common/autotest_common.sh@10 -- # set +x 00:18:29.927 10:32:23 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:18:29.927 10:32:23 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:18:29.927 10:32:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:29.927 10:32:23 -- common/autotest_common.sh@10 -- # set +x 00:18:29.927 ************************************ 00:18:29.927 START TEST raid_state_function_test_sb 00:18:29.927 ************************************ 00:18:29.927 10:32:23 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 4 true 00:18:29.927 10:32:23 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:18:29.927 10:32:23 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:18:29.927 10:32:23 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:18:29.927 10:32:23 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:18:29.927 10:32:23 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:18:29.927 10:32:23 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:18:29.927 10:32:23 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:29.927 10:32:23 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:18:29.927 10:32:23 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:29.927 10:32:23 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:29.927 10:32:23 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:18:29.927 10:32:23 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:29.927 10:32:23 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:29.927 10:32:23 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:18:29.927 10:32:23 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:29.927 10:32:23 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:29.927 10:32:23 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:18:29.927 10:32:23 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:29.927 10:32:23 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:29.927 10:32:23 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:18:29.927 10:32:23 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:18:29.927 10:32:23 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:18:29.927 10:32:23 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:18:29.927 10:32:23 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:18:29.927 10:32:23 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:18:29.927 Process raid pid: 122016 00:18:29.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
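
The superblock variant that starts here brings up its own bdev_svc instance on a private RPC socket and polls until it answers, as the 'Waiting for process...' message above indicates. A rough sketch of that bring-up, assuming rpc_get_methods as the liveness probe (the real waitforlisten helper in autotest_common.sh is more elaborate and enforces a retry limit, max_retries=100 in this trace):

    # Sketch: start the bdev service with raid debug logging and wait for
    # its UNIX-domain RPC socket to become responsive.
    bdev_svc=/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc
    sock=/var/tmp/spdk-raid.sock
    "$bdev_svc" -r "$sock" -i 0 -L bdev_raid &
    raid_pid=$!
    echo "Process raid pid: $raid_pid"
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
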
00:18:29.927 10:32:23 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:18:29.927 10:32:23 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:18:29.927 10:32:23 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:18:29.927 10:32:23 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:18:29.927 10:32:23 -- bdev/bdev_raid.sh@226 -- # raid_pid=122016 00:18:29.927 10:32:23 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 122016' 00:18:29.927 10:32:23 -- bdev/bdev_raid.sh@228 -- # waitforlisten 122016 /var/tmp/spdk-raid.sock 00:18:29.927 10:32:23 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:29.927 10:32:23 -- common/autotest_common.sh@819 -- # '[' -z 122016 ']' 00:18:29.927 10:32:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:29.927 10:32:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:29.927 10:32:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:29.927 10:32:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:29.927 10:32:23 -- common/autotest_common.sh@10 -- # set +x 00:18:29.927 [2024-07-12 10:32:23.756166] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:18:29.927 [2024-07-12 10:32:23.756569] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:30.185 [2024-07-12 10:32:23.928972] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.185 [2024-07-12 10:32:24.087332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:30.443 [2024-07-12 10:32:24.253427] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:31.009 10:32:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:31.009 10:32:24 -- common/autotest_common.sh@852 -- # return 0 00:18:31.009 10:32:24 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:31.009 [2024-07-12 10:32:24.896115] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:31.009 [2024-07-12 10:32:24.896396] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:31.009 [2024-07-12 10:32:24.896508] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:31.009 [2024-07-12 10:32:24.896567] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:31.009 [2024-07-12 10:32:24.896780] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:31.009 [2024-07-12 10:32:24.896855] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:31.009 [2024-07-12 10:32:24.896886] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:31.009 [2024-07-12 10:32:24.896925] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:31.009 10:32:24 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:31.009 10:32:24 -- bdev/bdev_raid.sh@117 -- # local 
raid_bdev_name=Existed_Raid 00:18:31.010 10:32:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:31.010 10:32:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:31.010 10:32:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:31.010 10:32:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:31.010 10:32:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:31.010 10:32:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:31.010 10:32:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:31.010 10:32:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:31.010 10:32:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:31.010 10:32:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:31.267 10:32:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:31.267 "name": "Existed_Raid", 00:18:31.267 "uuid": "3f18f06a-8f97-41c7-ac0f-2c662b0b6e3f", 00:18:31.267 "strip_size_kb": 64, 00:18:31.267 "state": "configuring", 00:18:31.267 "raid_level": "raid0", 00:18:31.267 "superblock": true, 00:18:31.267 "num_base_bdevs": 4, 00:18:31.267 "num_base_bdevs_discovered": 0, 00:18:31.267 "num_base_bdevs_operational": 4, 00:18:31.267 "base_bdevs_list": [ 00:18:31.267 { 00:18:31.267 "name": "BaseBdev1", 00:18:31.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.267 "is_configured": false, 00:18:31.267 "data_offset": 0, 00:18:31.267 "data_size": 0 00:18:31.267 }, 00:18:31.267 { 00:18:31.267 "name": "BaseBdev2", 00:18:31.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.267 "is_configured": false, 00:18:31.267 "data_offset": 0, 00:18:31.267 "data_size": 0 00:18:31.267 }, 00:18:31.267 { 00:18:31.267 "name": "BaseBdev3", 00:18:31.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.267 "is_configured": false, 00:18:31.267 "data_offset": 0, 00:18:31.267 "data_size": 0 00:18:31.267 }, 00:18:31.267 { 00:18:31.267 "name": "BaseBdev4", 00:18:31.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.267 "is_configured": false, 00:18:31.267 "data_offset": 0, 00:18:31.267 "data_size": 0 00:18:31.267 } 00:18:31.267 ] 00:18:31.267 }' 00:18:31.267 10:32:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:31.267 10:32:25 -- common/autotest_common.sh@10 -- # set +x 00:18:31.832 10:32:25 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:32.089 [2024-07-12 10:32:25.964202] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:32.089 [2024-07-12 10:32:25.964596] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:18:32.089 10:32:25 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:32.347 [2024-07-12 10:32:26.224300] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:32.347 [2024-07-12 10:32:26.224595] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:32.347 [2024-07-12 10:32:26.224714] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:32.347 [2024-07-12 10:32:26.224799] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 
doesn't exist now 00:18:32.347 [2024-07-12 10:32:26.224901] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:32.347 [2024-07-12 10:32:26.224974] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:32.347 [2024-07-12 10:32:26.225059] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:32.347 [2024-07-12 10:32:26.225120] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:32.347 10:32:26 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:32.605 [2024-07-12 10:32:26.513561] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:32.605 BaseBdev1 00:18:32.863 10:32:26 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:18:32.863 10:32:26 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:18:32.863 10:32:26 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:32.863 10:32:26 -- common/autotest_common.sh@889 -- # local i 00:18:32.863 10:32:26 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:32.863 10:32:26 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:32.863 10:32:26 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:32.863 10:32:26 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:33.121 [ 00:18:33.121 { 00:18:33.121 "name": "BaseBdev1", 00:18:33.121 "aliases": [ 00:18:33.121 "d407caa0-231a-4e1a-a858-ff0aee6bdf80" 00:18:33.121 ], 00:18:33.121 "product_name": "Malloc disk", 00:18:33.121 "block_size": 512, 00:18:33.121 "num_blocks": 65536, 00:18:33.121 "uuid": "d407caa0-231a-4e1a-a858-ff0aee6bdf80", 00:18:33.121 "assigned_rate_limits": { 00:18:33.121 "rw_ios_per_sec": 0, 00:18:33.121 "rw_mbytes_per_sec": 0, 00:18:33.121 "r_mbytes_per_sec": 0, 00:18:33.121 "w_mbytes_per_sec": 0 00:18:33.121 }, 00:18:33.121 "claimed": true, 00:18:33.121 "claim_type": "exclusive_write", 00:18:33.121 "zoned": false, 00:18:33.121 "supported_io_types": { 00:18:33.121 "read": true, 00:18:33.121 "write": true, 00:18:33.121 "unmap": true, 00:18:33.121 "write_zeroes": true, 00:18:33.121 "flush": true, 00:18:33.121 "reset": true, 00:18:33.121 "compare": false, 00:18:33.121 "compare_and_write": false, 00:18:33.121 "abort": true, 00:18:33.121 "nvme_admin": false, 00:18:33.121 "nvme_io": false 00:18:33.121 }, 00:18:33.121 "memory_domains": [ 00:18:33.121 { 00:18:33.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:33.121 "dma_device_type": 2 00:18:33.121 } 00:18:33.121 ], 00:18:33.121 "driver_specific": {} 00:18:33.121 } 00:18:33.121 ] 00:18:33.121 10:32:27 -- common/autotest_common.sh@895 -- # return 0 00:18:33.121 10:32:27 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:33.121 10:32:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:33.121 10:32:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:33.121 10:32:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:33.121 10:32:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:33.121 10:32:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:33.121 10:32:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 
00:18:33.121 10:32:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:33.121 10:32:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:33.121 10:32:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:33.121 10:32:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:33.121 10:32:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:33.378 10:32:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:33.378 "name": "Existed_Raid", 00:18:33.378 "uuid": "a3aa88a5-3fd5-4b59-b195-dd908f082ea2", 00:18:33.378 "strip_size_kb": 64, 00:18:33.378 "state": "configuring", 00:18:33.378 "raid_level": "raid0", 00:18:33.378 "superblock": true, 00:18:33.378 "num_base_bdevs": 4, 00:18:33.378 "num_base_bdevs_discovered": 1, 00:18:33.378 "num_base_bdevs_operational": 4, 00:18:33.378 "base_bdevs_list": [ 00:18:33.378 { 00:18:33.378 "name": "BaseBdev1", 00:18:33.378 "uuid": "d407caa0-231a-4e1a-a858-ff0aee6bdf80", 00:18:33.378 "is_configured": true, 00:18:33.378 "data_offset": 2048, 00:18:33.378 "data_size": 63488 00:18:33.378 }, 00:18:33.378 { 00:18:33.378 "name": "BaseBdev2", 00:18:33.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.378 "is_configured": false, 00:18:33.378 "data_offset": 0, 00:18:33.378 "data_size": 0 00:18:33.378 }, 00:18:33.378 { 00:18:33.378 "name": "BaseBdev3", 00:18:33.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.378 "is_configured": false, 00:18:33.378 "data_offset": 0, 00:18:33.378 "data_size": 0 00:18:33.378 }, 00:18:33.378 { 00:18:33.378 "name": "BaseBdev4", 00:18:33.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.378 "is_configured": false, 00:18:33.378 "data_offset": 0, 00:18:33.378 "data_size": 0 00:18:33.378 } 00:18:33.378 ] 00:18:33.378 }' 00:18:33.378 10:32:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:33.378 10:32:27 -- common/autotest_common.sh@10 -- # set +x 00:18:34.312 10:32:27 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:34.312 [2024-07-12 10:32:28.117825] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:34.312 [2024-07-12 10:32:28.117991] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:18:34.312 10:32:28 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:18:34.312 10:32:28 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:34.569 10:32:28 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:34.828 BaseBdev1 00:18:34.828 10:32:28 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:18:34.828 10:32:28 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:18:34.828 10:32:28 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:34.828 10:32:28 -- common/autotest_common.sh@889 -- # local i 00:18:34.828 10:32:28 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:34.828 10:32:28 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:34.828 10:32:28 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:35.086 10:32:28 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:35.345 [ 00:18:35.345 { 00:18:35.345 "name": "BaseBdev1", 00:18:35.345 "aliases": [ 00:18:35.345 "263bbef9-7fec-483a-a16d-1d0f2461fc03" 00:18:35.345 ], 00:18:35.345 "product_name": "Malloc disk", 00:18:35.345 "block_size": 512, 00:18:35.345 "num_blocks": 65536, 00:18:35.345 "uuid": "263bbef9-7fec-483a-a16d-1d0f2461fc03", 00:18:35.345 "assigned_rate_limits": { 00:18:35.345 "rw_ios_per_sec": 0, 00:18:35.345 "rw_mbytes_per_sec": 0, 00:18:35.345 "r_mbytes_per_sec": 0, 00:18:35.345 "w_mbytes_per_sec": 0 00:18:35.345 }, 00:18:35.345 "claimed": false, 00:18:35.345 "zoned": false, 00:18:35.345 "supported_io_types": { 00:18:35.345 "read": true, 00:18:35.345 "write": true, 00:18:35.345 "unmap": true, 00:18:35.345 "write_zeroes": true, 00:18:35.345 "flush": true, 00:18:35.345 "reset": true, 00:18:35.345 "compare": false, 00:18:35.345 "compare_and_write": false, 00:18:35.345 "abort": true, 00:18:35.345 "nvme_admin": false, 00:18:35.345 "nvme_io": false 00:18:35.345 }, 00:18:35.346 "memory_domains": [ 00:18:35.346 { 00:18:35.346 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:35.346 "dma_device_type": 2 00:18:35.346 } 00:18:35.346 ], 00:18:35.346 "driver_specific": {} 00:18:35.346 } 00:18:35.346 ] 00:18:35.346 10:32:29 -- common/autotest_common.sh@895 -- # return 0 00:18:35.346 10:32:29 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:35.346 [2024-07-12 10:32:29.231467] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:35.346 [2024-07-12 10:32:29.233207] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:35.346 [2024-07-12 10:32:29.233410] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:35.346 [2024-07-12 10:32:29.233517] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:35.346 [2024-07-12 10:32:29.233576] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:35.346 [2024-07-12 10:32:29.233685] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:35.346 [2024-07-12 10:32:29.233738] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:35.346 10:32:29 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:18:35.346 10:32:29 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:35.346 10:32:29 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:35.346 10:32:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:35.346 10:32:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:35.346 10:32:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:35.346 10:32:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:35.346 10:32:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:35.346 10:32:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:35.346 10:32:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:35.346 10:32:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:35.346 10:32:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:35.346 10:32:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:18:35.346 10:32:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:35.604 10:32:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:35.604 "name": "Existed_Raid", 00:18:35.604 "uuid": "169bf555-5acd-4cd5-bea1-242b8bec8345", 00:18:35.604 "strip_size_kb": 64, 00:18:35.604 "state": "configuring", 00:18:35.604 "raid_level": "raid0", 00:18:35.604 "superblock": true, 00:18:35.604 "num_base_bdevs": 4, 00:18:35.604 "num_base_bdevs_discovered": 1, 00:18:35.604 "num_base_bdevs_operational": 4, 00:18:35.604 "base_bdevs_list": [ 00:18:35.604 { 00:18:35.604 "name": "BaseBdev1", 00:18:35.604 "uuid": "263bbef9-7fec-483a-a16d-1d0f2461fc03", 00:18:35.604 "is_configured": true, 00:18:35.604 "data_offset": 2048, 00:18:35.604 "data_size": 63488 00:18:35.604 }, 00:18:35.604 { 00:18:35.604 "name": "BaseBdev2", 00:18:35.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.604 "is_configured": false, 00:18:35.604 "data_offset": 0, 00:18:35.604 "data_size": 0 00:18:35.604 }, 00:18:35.604 { 00:18:35.604 "name": "BaseBdev3", 00:18:35.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.604 "is_configured": false, 00:18:35.604 "data_offset": 0, 00:18:35.604 "data_size": 0 00:18:35.604 }, 00:18:35.604 { 00:18:35.604 "name": "BaseBdev4", 00:18:35.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.604 "is_configured": false, 00:18:35.604 "data_offset": 0, 00:18:35.604 "data_size": 0 00:18:35.604 } 00:18:35.604 ] 00:18:35.604 }' 00:18:35.605 10:32:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:35.605 10:32:29 -- common/autotest_common.sh@10 -- # set +x 00:18:36.539 10:32:30 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:36.539 [2024-07-12 10:32:30.342489] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:36.539 BaseBdev2 00:18:36.539 10:32:30 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:18:36.539 10:32:30 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:18:36.539 10:32:30 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:36.539 10:32:30 -- common/autotest_common.sh@889 -- # local i 00:18:36.539 10:32:30 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:36.539 10:32:30 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:36.539 10:32:30 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:36.797 10:32:30 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:37.056 [ 00:18:37.056 { 00:18:37.056 "name": "BaseBdev2", 00:18:37.056 "aliases": [ 00:18:37.056 "2f0ed6ee-0815-4ed2-94f8-7da92cdcf5c1" 00:18:37.056 ], 00:18:37.056 "product_name": "Malloc disk", 00:18:37.056 "block_size": 512, 00:18:37.056 "num_blocks": 65536, 00:18:37.056 "uuid": "2f0ed6ee-0815-4ed2-94f8-7da92cdcf5c1", 00:18:37.056 "assigned_rate_limits": { 00:18:37.056 "rw_ios_per_sec": 0, 00:18:37.056 "rw_mbytes_per_sec": 0, 00:18:37.056 "r_mbytes_per_sec": 0, 00:18:37.056 "w_mbytes_per_sec": 0 00:18:37.056 }, 00:18:37.056 "claimed": true, 00:18:37.056 "claim_type": "exclusive_write", 00:18:37.056 "zoned": false, 00:18:37.056 "supported_io_types": { 00:18:37.056 "read": true, 00:18:37.056 "write": true, 00:18:37.056 "unmap": true, 00:18:37.056 "write_zeroes": true, 00:18:37.056 "flush": true, 
00:18:37.056 "reset": true, 00:18:37.056 "compare": false, 00:18:37.056 "compare_and_write": false, 00:18:37.056 "abort": true, 00:18:37.056 "nvme_admin": false, 00:18:37.056 "nvme_io": false 00:18:37.056 }, 00:18:37.056 "memory_domains": [ 00:18:37.056 { 00:18:37.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:37.056 "dma_device_type": 2 00:18:37.056 } 00:18:37.056 ], 00:18:37.056 "driver_specific": {} 00:18:37.056 } 00:18:37.056 ] 00:18:37.056 10:32:30 -- common/autotest_common.sh@895 -- # return 0 00:18:37.056 10:32:30 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:37.056 10:32:30 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:37.056 10:32:30 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:37.056 10:32:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:37.056 10:32:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:37.056 10:32:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:37.056 10:32:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:37.056 10:32:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:37.056 10:32:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:37.056 10:32:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:37.056 10:32:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:37.056 10:32:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:37.056 10:32:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:37.056 10:32:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:37.314 10:32:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:37.314 "name": "Existed_Raid", 00:18:37.314 "uuid": "169bf555-5acd-4cd5-bea1-242b8bec8345", 00:18:37.314 "strip_size_kb": 64, 00:18:37.314 "state": "configuring", 00:18:37.314 "raid_level": "raid0", 00:18:37.314 "superblock": true, 00:18:37.314 "num_base_bdevs": 4, 00:18:37.314 "num_base_bdevs_discovered": 2, 00:18:37.314 "num_base_bdevs_operational": 4, 00:18:37.314 "base_bdevs_list": [ 00:18:37.314 { 00:18:37.314 "name": "BaseBdev1", 00:18:37.314 "uuid": "263bbef9-7fec-483a-a16d-1d0f2461fc03", 00:18:37.314 "is_configured": true, 00:18:37.314 "data_offset": 2048, 00:18:37.314 "data_size": 63488 00:18:37.314 }, 00:18:37.314 { 00:18:37.314 "name": "BaseBdev2", 00:18:37.314 "uuid": "2f0ed6ee-0815-4ed2-94f8-7da92cdcf5c1", 00:18:37.314 "is_configured": true, 00:18:37.314 "data_offset": 2048, 00:18:37.314 "data_size": 63488 00:18:37.314 }, 00:18:37.314 { 00:18:37.314 "name": "BaseBdev3", 00:18:37.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.314 "is_configured": false, 00:18:37.314 "data_offset": 0, 00:18:37.314 "data_size": 0 00:18:37.314 }, 00:18:37.314 { 00:18:37.314 "name": "BaseBdev4", 00:18:37.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.314 "is_configured": false, 00:18:37.314 "data_offset": 0, 00:18:37.314 "data_size": 0 00:18:37.314 } 00:18:37.314 ] 00:18:37.314 }' 00:18:37.314 10:32:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:37.314 10:32:31 -- common/autotest_common.sh@10 -- # set +x 00:18:37.880 10:32:31 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:38.138 [2024-07-12 10:32:31.887158] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:38.138 BaseBdev3 00:18:38.138 
10:32:31 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:18:38.138 10:32:31 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:18:38.138 10:32:31 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:38.138 10:32:31 -- common/autotest_common.sh@889 -- # local i 00:18:38.138 10:32:31 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:38.138 10:32:31 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:38.138 10:32:31 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:38.396 10:32:32 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:38.653 [ 00:18:38.653 { 00:18:38.653 "name": "BaseBdev3", 00:18:38.653 "aliases": [ 00:18:38.653 "0bb8874c-ce3f-4436-9e7f-d539f2a2c64b" 00:18:38.653 ], 00:18:38.653 "product_name": "Malloc disk", 00:18:38.653 "block_size": 512, 00:18:38.653 "num_blocks": 65536, 00:18:38.653 "uuid": "0bb8874c-ce3f-4436-9e7f-d539f2a2c64b", 00:18:38.653 "assigned_rate_limits": { 00:18:38.653 "rw_ios_per_sec": 0, 00:18:38.653 "rw_mbytes_per_sec": 0, 00:18:38.653 "r_mbytes_per_sec": 0, 00:18:38.653 "w_mbytes_per_sec": 0 00:18:38.653 }, 00:18:38.653 "claimed": true, 00:18:38.653 "claim_type": "exclusive_write", 00:18:38.653 "zoned": false, 00:18:38.653 "supported_io_types": { 00:18:38.653 "read": true, 00:18:38.653 "write": true, 00:18:38.653 "unmap": true, 00:18:38.653 "write_zeroes": true, 00:18:38.653 "flush": true, 00:18:38.653 "reset": true, 00:18:38.653 "compare": false, 00:18:38.653 "compare_and_write": false, 00:18:38.653 "abort": true, 00:18:38.653 "nvme_admin": false, 00:18:38.653 "nvme_io": false 00:18:38.653 }, 00:18:38.653 "memory_domains": [ 00:18:38.653 { 00:18:38.653 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:38.653 "dma_device_type": 2 00:18:38.653 } 00:18:38.653 ], 00:18:38.653 "driver_specific": {} 00:18:38.653 } 00:18:38.653 ] 00:18:38.653 10:32:32 -- common/autotest_common.sh@895 -- # return 0 00:18:38.653 10:32:32 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:38.653 10:32:32 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:38.653 10:32:32 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:38.653 10:32:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:38.653 10:32:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:38.653 10:32:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:38.653 10:32:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:38.653 10:32:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:38.653 10:32:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:38.653 10:32:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:38.653 10:32:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:38.653 10:32:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:38.653 10:32:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:38.653 10:32:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:38.653 10:32:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:38.653 "name": "Existed_Raid", 00:18:38.653 "uuid": "169bf555-5acd-4cd5-bea1-242b8bec8345", 00:18:38.653 "strip_size_kb": 64, 00:18:38.653 "state": "configuring", 00:18:38.653 "raid_level": 
"raid0", 00:18:38.653 "superblock": true, 00:18:38.653 "num_base_bdevs": 4, 00:18:38.653 "num_base_bdevs_discovered": 3, 00:18:38.653 "num_base_bdevs_operational": 4, 00:18:38.653 "base_bdevs_list": [ 00:18:38.653 { 00:18:38.653 "name": "BaseBdev1", 00:18:38.653 "uuid": "263bbef9-7fec-483a-a16d-1d0f2461fc03", 00:18:38.653 "is_configured": true, 00:18:38.653 "data_offset": 2048, 00:18:38.653 "data_size": 63488 00:18:38.653 }, 00:18:38.653 { 00:18:38.653 "name": "BaseBdev2", 00:18:38.653 "uuid": "2f0ed6ee-0815-4ed2-94f8-7da92cdcf5c1", 00:18:38.653 "is_configured": true, 00:18:38.653 "data_offset": 2048, 00:18:38.653 "data_size": 63488 00:18:38.653 }, 00:18:38.653 { 00:18:38.653 "name": "BaseBdev3", 00:18:38.653 "uuid": "0bb8874c-ce3f-4436-9e7f-d539f2a2c64b", 00:18:38.653 "is_configured": true, 00:18:38.653 "data_offset": 2048, 00:18:38.653 "data_size": 63488 00:18:38.653 }, 00:18:38.653 { 00:18:38.653 "name": "BaseBdev4", 00:18:38.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.653 "is_configured": false, 00:18:38.653 "data_offset": 0, 00:18:38.653 "data_size": 0 00:18:38.653 } 00:18:38.653 ] 00:18:38.653 }' 00:18:38.653 10:32:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:38.654 10:32:32 -- common/autotest_common.sh@10 -- # set +x 00:18:39.584 10:32:33 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:18:39.584 [2024-07-12 10:32:33.414797] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:39.584 [2024-07-12 10:32:33.415180] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:18:39.584 BaseBdev4 00:18:39.584 [2024-07-12 10:32:33.415297] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:39.584 [2024-07-12 10:32:33.415537] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:18:39.584 [2024-07-12 10:32:33.416038] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:18:39.584 [2024-07-12 10:32:33.416168] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:18:39.584 [2024-07-12 10:32:33.416417] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:39.584 10:32:33 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:18:39.584 10:32:33 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:18:39.584 10:32:33 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:39.584 10:32:33 -- common/autotest_common.sh@889 -- # local i 00:18:39.584 10:32:33 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:39.584 10:32:33 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:39.584 10:32:33 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:39.842 10:32:33 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:40.099 [ 00:18:40.099 { 00:18:40.099 "name": "BaseBdev4", 00:18:40.099 "aliases": [ 00:18:40.099 "e4cd9d2e-93b5-4fc3-ae43-34472f7f871a" 00:18:40.099 ], 00:18:40.099 "product_name": "Malloc disk", 00:18:40.099 "block_size": 512, 00:18:40.099 "num_blocks": 65536, 00:18:40.099 "uuid": "e4cd9d2e-93b5-4fc3-ae43-34472f7f871a", 00:18:40.099 "assigned_rate_limits": { 00:18:40.099 "rw_ios_per_sec": 0, 00:18:40.099 
"rw_mbytes_per_sec": 0, 00:18:40.099 "r_mbytes_per_sec": 0, 00:18:40.099 "w_mbytes_per_sec": 0 00:18:40.099 }, 00:18:40.099 "claimed": true, 00:18:40.099 "claim_type": "exclusive_write", 00:18:40.099 "zoned": false, 00:18:40.099 "supported_io_types": { 00:18:40.099 "read": true, 00:18:40.099 "write": true, 00:18:40.099 "unmap": true, 00:18:40.099 "write_zeroes": true, 00:18:40.099 "flush": true, 00:18:40.099 "reset": true, 00:18:40.099 "compare": false, 00:18:40.099 "compare_and_write": false, 00:18:40.099 "abort": true, 00:18:40.099 "nvme_admin": false, 00:18:40.099 "nvme_io": false 00:18:40.099 }, 00:18:40.099 "memory_domains": [ 00:18:40.099 { 00:18:40.099 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:40.099 "dma_device_type": 2 00:18:40.099 } 00:18:40.099 ], 00:18:40.099 "driver_specific": {} 00:18:40.099 } 00:18:40.099 ] 00:18:40.099 10:32:33 -- common/autotest_common.sh@895 -- # return 0 00:18:40.099 10:32:33 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:40.099 10:32:33 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:40.099 10:32:33 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:18:40.099 10:32:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:40.099 10:32:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:40.099 10:32:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:40.099 10:32:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:40.099 10:32:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:40.099 10:32:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:40.099 10:32:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:40.099 10:32:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:40.099 10:32:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:40.100 10:32:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:40.100 10:32:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:40.356 10:32:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:40.356 "name": "Existed_Raid", 00:18:40.356 "uuid": "169bf555-5acd-4cd5-bea1-242b8bec8345", 00:18:40.356 "strip_size_kb": 64, 00:18:40.356 "state": "online", 00:18:40.356 "raid_level": "raid0", 00:18:40.356 "superblock": true, 00:18:40.356 "num_base_bdevs": 4, 00:18:40.356 "num_base_bdevs_discovered": 4, 00:18:40.356 "num_base_bdevs_operational": 4, 00:18:40.356 "base_bdevs_list": [ 00:18:40.356 { 00:18:40.356 "name": "BaseBdev1", 00:18:40.356 "uuid": "263bbef9-7fec-483a-a16d-1d0f2461fc03", 00:18:40.356 "is_configured": true, 00:18:40.356 "data_offset": 2048, 00:18:40.356 "data_size": 63488 00:18:40.356 }, 00:18:40.356 { 00:18:40.356 "name": "BaseBdev2", 00:18:40.356 "uuid": "2f0ed6ee-0815-4ed2-94f8-7da92cdcf5c1", 00:18:40.356 "is_configured": true, 00:18:40.356 "data_offset": 2048, 00:18:40.356 "data_size": 63488 00:18:40.356 }, 00:18:40.356 { 00:18:40.356 "name": "BaseBdev3", 00:18:40.356 "uuid": "0bb8874c-ce3f-4436-9e7f-d539f2a2c64b", 00:18:40.356 "is_configured": true, 00:18:40.356 "data_offset": 2048, 00:18:40.356 "data_size": 63488 00:18:40.356 }, 00:18:40.356 { 00:18:40.356 "name": "BaseBdev4", 00:18:40.356 "uuid": "e4cd9d2e-93b5-4fc3-ae43-34472f7f871a", 00:18:40.356 "is_configured": true, 00:18:40.356 "data_offset": 2048, 00:18:40.356 "data_size": 63488 00:18:40.356 } 00:18:40.356 ] 00:18:40.356 }' 00:18:40.356 10:32:34 -- bdev/bdev_raid.sh@129 -- # 
xtrace_disable 00:18:40.356 10:32:34 -- common/autotest_common.sh@10 -- # set +x 00:18:40.920 10:32:34 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:41.177 [2024-07-12 10:32:34.891171] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:41.177 [2024-07-12 10:32:34.891319] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:41.177 [2024-07-12 10:32:34.891562] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:41.177 10:32:34 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:18:41.178 10:32:34 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:18:41.178 10:32:34 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:41.178 10:32:34 -- bdev/bdev_raid.sh@197 -- # return 1 00:18:41.178 10:32:34 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:18:41.178 10:32:34 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:18:41.178 10:32:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:41.178 10:32:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:18:41.178 10:32:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:41.178 10:32:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:41.178 10:32:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:41.178 10:32:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:41.178 10:32:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:41.178 10:32:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:41.178 10:32:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:41.178 10:32:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:41.178 10:32:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:41.436 10:32:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:41.436 "name": "Existed_Raid", 00:18:41.436 "uuid": "169bf555-5acd-4cd5-bea1-242b8bec8345", 00:18:41.436 "strip_size_kb": 64, 00:18:41.436 "state": "offline", 00:18:41.436 "raid_level": "raid0", 00:18:41.436 "superblock": true, 00:18:41.436 "num_base_bdevs": 4, 00:18:41.436 "num_base_bdevs_discovered": 3, 00:18:41.436 "num_base_bdevs_operational": 3, 00:18:41.436 "base_bdevs_list": [ 00:18:41.436 { 00:18:41.436 "name": null, 00:18:41.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.436 "is_configured": false, 00:18:41.436 "data_offset": 2048, 00:18:41.436 "data_size": 63488 00:18:41.436 }, 00:18:41.436 { 00:18:41.436 "name": "BaseBdev2", 00:18:41.436 "uuid": "2f0ed6ee-0815-4ed2-94f8-7da92cdcf5c1", 00:18:41.436 "is_configured": true, 00:18:41.436 "data_offset": 2048, 00:18:41.436 "data_size": 63488 00:18:41.436 }, 00:18:41.436 { 00:18:41.436 "name": "BaseBdev3", 00:18:41.436 "uuid": "0bb8874c-ce3f-4436-9e7f-d539f2a2c64b", 00:18:41.436 "is_configured": true, 00:18:41.436 "data_offset": 2048, 00:18:41.436 "data_size": 63488 00:18:41.436 }, 00:18:41.436 { 00:18:41.436 "name": "BaseBdev4", 00:18:41.436 "uuid": "e4cd9d2e-93b5-4fc3-ae43-34472f7f871a", 00:18:41.436 "is_configured": true, 00:18:41.436 "data_offset": 2048, 00:18:41.436 "data_size": 63488 00:18:41.436 } 00:18:41.436 ] 00:18:41.436 }' 00:18:41.436 10:32:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:41.436 10:32:35 -- common/autotest_common.sh@10 -- # set +x 00:18:42.002 10:32:35 -- 
bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:18:42.002 10:32:35 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:42.002 10:32:35 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:42.002 10:32:35 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:42.259 10:32:36 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:42.259 10:32:36 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:42.259 10:32:36 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:42.517 [2024-07-12 10:32:36.260120] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:42.517 10:32:36 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:42.517 10:32:36 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:42.517 10:32:36 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:42.517 10:32:36 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:42.774 10:32:36 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:42.774 10:32:36 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:42.774 10:32:36 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:43.032 [2024-07-12 10:32:36.760550] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:43.032 10:32:36 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:43.032 10:32:36 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:43.032 10:32:36 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:43.032 10:32:36 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:43.290 10:32:37 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:43.290 10:32:37 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:43.290 10:32:37 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:18:43.548 [2024-07-12 10:32:37.295925] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:43.548 [2024-07-12 10:32:37.296106] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:18:43.548 10:32:37 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:43.548 10:32:37 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:43.548 10:32:37 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:43.548 10:32:37 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:18:43.806 10:32:37 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:18:43.806 10:32:37 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:18:43.806 10:32:37 -- bdev/bdev_raid.sh@287 -- # killprocess 122016 00:18:43.806 10:32:37 -- common/autotest_common.sh@926 -- # '[' -z 122016 ']' 00:18:43.806 10:32:37 -- common/autotest_common.sh@930 -- # kill -0 122016 00:18:43.806 10:32:37 -- common/autotest_common.sh@931 -- # uname 00:18:43.806 10:32:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:43.806 10:32:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 122016 00:18:43.806 killing process with pid 122016 00:18:43.806 10:32:37 -- common/autotest_common.sh@932 -- # 
process_name=reactor_0 00:18:43.806 10:32:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:43.806 10:32:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 122016' 00:18:43.806 10:32:37 -- common/autotest_common.sh@945 -- # kill 122016 00:18:43.806 10:32:37 -- common/autotest_common.sh@950 -- # wait 122016 00:18:43.806 [2024-07-12 10:32:37.594425] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:43.806 [2024-07-12 10:32:37.594552] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:44.742 ************************************ 00:18:44.742 END TEST raid_state_function_test_sb 00:18:44.742 ************************************ 00:18:44.742 10:32:38 -- bdev/bdev_raid.sh@289 -- # return 0 00:18:44.742 00:18:44.742 real 0m14.830s 00:18:44.742 user 0m26.660s 00:18:44.742 sys 0m1.760s 00:18:44.742 10:32:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:44.742 10:32:38 -- common/autotest_common.sh@10 -- # set +x 00:18:44.742 10:32:38 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:18:44.742 10:32:38 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:18:44.742 10:32:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:44.742 10:32:38 -- common/autotest_common.sh@10 -- # set +x 00:18:44.742 ************************************ 00:18:44.742 START TEST raid_superblock_test 00:18:44.742 ************************************ 00:18:44.742 10:32:38 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid0 4 00:18:44.742 10:32:38 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:18:44.742 10:32:38 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:18:44.742 10:32:38 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:18:44.742 10:32:38 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:18:44.742 10:32:38 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:18:44.742 10:32:38 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:18:44.742 10:32:38 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:18:44.742 10:32:38 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:18:44.742 10:32:38 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:18:44.742 10:32:38 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:18:44.742 10:32:38 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:18:44.742 10:32:38 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:18:44.742 10:32:38 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:18:44.742 10:32:38 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:18:44.742 10:32:38 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:18:44.742 10:32:38 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:18:44.742 10:32:38 -- bdev/bdev_raid.sh@357 -- # raid_pid=122494 00:18:44.742 10:32:38 -- bdev/bdev_raid.sh@358 -- # waitforlisten 122494 /var/tmp/spdk-raid.sock 00:18:44.742 10:32:38 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:18:44.742 10:32:38 -- common/autotest_common.sh@819 -- # '[' -z 122494 ']' 00:18:44.742 10:32:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:44.742 10:32:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:44.742 10:32:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
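
The teardown traced just above (kill -0, uname, ps, kill, wait on pid 122016) is the generic killprocess helper. In rough outline it behaves like this sketch (the sudo guard mirrors the reactor_0 comparison in the trace; the real helper handles that case more carefully):

    # Sketch: make sure the pid is alive and looks like an SPDK reactor
    # before signalling it, then reap it so the exit status is collected.
    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1
        local pname
        pname=$(ps --no-headers -o comm= "$pid")
        [ "$pname" = "sudo" ] && return 1   # never signal a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }
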
00:18:44.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:44.742 10:32:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:44.742 10:32:38 -- common/autotest_common.sh@10 -- # set +x 00:18:44.742 [2024-07-12 10:32:38.644426] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:18:44.742 [2024-07-12 10:32:38.645448] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122494 ] 00:18:45.000 [2024-07-12 10:32:38.806911] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.258 [2024-07-12 10:32:38.979093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:45.258 [2024-07-12 10:32:39.143557] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:45.827 10:32:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:45.827 10:32:39 -- common/autotest_common.sh@852 -- # return 0 00:18:45.827 10:32:39 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:18:45.827 10:32:39 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:45.827 10:32:39 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:18:45.827 10:32:39 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:18:45.827 10:32:39 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:45.827 10:32:39 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:45.827 10:32:39 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:45.827 10:32:39 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:45.827 10:32:39 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:18:46.095 malloc1 00:18:46.095 10:32:39 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:46.390 [2024-07-12 10:32:40.049775] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:46.390 [2024-07-12 10:32:40.050027] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:46.391 [2024-07-12 10:32:40.050189] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:18:46.391 [2024-07-12 10:32:40.050327] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:46.391 [2024-07-12 10:32:40.052577] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:46.391 [2024-07-12 10:32:40.052746] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:46.391 pt1 00:18:46.391 10:32:40 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:46.391 10:32:40 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:46.391 10:32:40 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:18:46.391 10:32:40 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:18:46.391 10:32:40 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:46.391 10:32:40 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:46.391 10:32:40 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:46.391 10:32:40 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 
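
Each base device in raid_superblock_test is a malloc bdev wrapped in a passthru bdev with a deterministic UUID, built one per loop iteration as traced above for malloc1/pt1 and prepared for malloc2/pt2. A condensed sketch of that loop with the same parameters as the log (32 MB malloc disks, 512-byte blocks):

    # Sketch: create malloc1..malloc4 and wrap each in a passthru bdev
    # pt1..pt4 with a fixed UUID so the raid superblock can refer to it.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for i in 1 2 3 4; do
        $rpc bdev_malloc_create 32 512 -b "malloc$i"
        $rpc bdev_passthru_create -b "malloc$i" -p "pt$i" \
            -u "00000000-0000-0000-0000-00000000000$i"
    done
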
00:18:46.391 10:32:40 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:18:46.391 malloc2 00:18:46.391 10:32:40 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:46.653 [2024-07-12 10:32:40.510068] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:46.653 [2024-07-12 10:32:40.510276] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:46.653 [2024-07-12 10:32:40.510353] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:18:46.653 [2024-07-12 10:32:40.510499] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:46.653 [2024-07-12 10:32:40.512687] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:46.653 [2024-07-12 10:32:40.512854] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:46.653 pt2 00:18:46.653 10:32:40 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:46.653 10:32:40 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:46.653 10:32:40 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:18:46.653 10:32:40 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:18:46.653 10:32:40 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:46.653 10:32:40 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:46.653 10:32:40 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:46.653 10:32:40 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:46.653 10:32:40 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:18:46.910 malloc3 00:18:46.910 10:32:40 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:47.168 [2024-07-12 10:32:40.951296] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:47.168 [2024-07-12 10:32:40.951562] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:47.168 [2024-07-12 10:32:40.951636] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:18:47.168 [2024-07-12 10:32:40.951927] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:47.168 [2024-07-12 10:32:40.954110] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:47.168 [2024-07-12 10:32:40.954301] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:47.168 pt3 00:18:47.168 10:32:40 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:47.168 10:32:40 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:47.168 10:32:40 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:18:47.168 10:32:40 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:18:47.168 10:32:40 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:18:47.168 10:32:40 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:47.168 10:32:40 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:47.168 10:32:40 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 
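
Once all four passthru bdevs exist, the names accumulated in base_bdevs_pt are joined into the -b argument of the create call that follows in the trace. A sketch of that final assembly (-s requests an on-disk superblock, -z 64 a 64 KiB strip size):

    # Sketch: assemble the raid0 bdev from the four passthru bdevs,
    # writing a superblock (-s) so it can be re-examined later.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    base_bdevs_pt=(pt1 pt2 pt3 pt4)
    $rpc bdev_raid_create -z 64 -r raid0 \
        -b "${base_bdevs_pt[*]}" -n raid_bdev1 -s
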
00:18:47.168 10:32:40 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:18:47.426 malloc4 00:18:47.426 10:32:41 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:47.684 [2024-07-12 10:32:41.348932] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:47.684 [2024-07-12 10:32:41.349157] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:47.684 [2024-07-12 10:32:41.349303] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:47.684 [2024-07-12 10:32:41.349442] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:47.684 [2024-07-12 10:32:41.351705] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:47.684 [2024-07-12 10:32:41.351897] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:47.684 pt4 00:18:47.684 10:32:41 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:47.684 10:32:41 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:47.684 10:32:41 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:18:47.684 [2024-07-12 10:32:41.533048] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:47.684 [2024-07-12 10:32:41.534943] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:47.684 [2024-07-12 10:32:41.535118] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:47.684 [2024-07-12 10:32:41.535232] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:47.684 [2024-07-12 10:32:41.535502] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:18:47.684 [2024-07-12 10:32:41.535619] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:47.684 [2024-07-12 10:32:41.535832] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:18:47.684 [2024-07-12 10:32:41.536304] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:18:47.684 [2024-07-12 10:32:41.536431] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:18:47.684 [2024-07-12 10:32:41.536667] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:47.684 10:32:41 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:18:47.684 10:32:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:47.684 10:32:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:47.684 10:32:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:47.684 10:32:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:47.684 10:32:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:47.684 10:32:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:47.684 10:32:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:47.684 10:32:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:47.685 10:32:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:47.685 10:32:41 -- bdev/bdev_raid.sh@127 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:47.685 10:32:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.943 10:32:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:47.943 "name": "raid_bdev1", 00:18:47.943 "uuid": "fdadc917-e74f-4708-8501-f8fa8e2d0f56", 00:18:47.943 "strip_size_kb": 64, 00:18:47.943 "state": "online", 00:18:47.943 "raid_level": "raid0", 00:18:47.943 "superblock": true, 00:18:47.943 "num_base_bdevs": 4, 00:18:47.943 "num_base_bdevs_discovered": 4, 00:18:47.943 "num_base_bdevs_operational": 4, 00:18:47.943 "base_bdevs_list": [ 00:18:47.943 { 00:18:47.943 "name": "pt1", 00:18:47.943 "uuid": "48ae0b53-01a4-51cd-a4fd-ccae10256f7e", 00:18:47.943 "is_configured": true, 00:18:47.943 "data_offset": 2048, 00:18:47.943 "data_size": 63488 00:18:47.943 }, 00:18:47.943 { 00:18:47.943 "name": "pt2", 00:18:47.943 "uuid": "a72bb14c-e69a-5809-b742-df4bcdbd8a60", 00:18:47.943 "is_configured": true, 00:18:47.943 "data_offset": 2048, 00:18:47.943 "data_size": 63488 00:18:47.943 }, 00:18:47.943 { 00:18:47.943 "name": "pt3", 00:18:47.943 "uuid": "105bff6f-eb4d-5e80-91e4-9cbcc0658ae7", 00:18:47.943 "is_configured": true, 00:18:47.943 "data_offset": 2048, 00:18:47.943 "data_size": 63488 00:18:47.943 }, 00:18:47.943 { 00:18:47.943 "name": "pt4", 00:18:47.943 "uuid": "d6c8ded7-21ab-5132-8f48-ad14db10ed14", 00:18:47.943 "is_configured": true, 00:18:47.943 "data_offset": 2048, 00:18:47.943 "data_size": 63488 00:18:47.943 } 00:18:47.943 ] 00:18:47.943 }' 00:18:47.943 10:32:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:47.943 10:32:41 -- common/autotest_common.sh@10 -- # set +x 00:18:48.877 10:32:42 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:48.877 10:32:42 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:18:48.877 [2024-07-12 10:32:42.605321] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:48.877 10:32:42 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=fdadc917-e74f-4708-8501-f8fa8e2d0f56 00:18:48.877 10:32:42 -- bdev/bdev_raid.sh@380 -- # '[' -z fdadc917-e74f-4708-8501-f8fa8e2d0f56 ']' 00:18:48.877 10:32:42 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:49.134 [2024-07-12 10:32:42.841172] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:49.135 [2024-07-12 10:32:42.841315] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:49.135 [2024-07-12 10:32:42.841480] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:49.135 [2024-07-12 10:32:42.841675] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:49.135 [2024-07-12 10:32:42.841787] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:18:49.135 10:32:42 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:49.135 10:32:42 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:18:49.135 10:32:43 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:18:49.135 10:32:43 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:18:49.135 10:32:43 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:49.135 10:32:43 -- bdev/bdev_raid.sh@393 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:49.392 10:32:43 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:49.392 10:32:43 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:49.649 10:32:43 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:49.649 10:32:43 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:49.907 10:32:43 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:49.907 10:32:43 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:18:50.164 10:32:43 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:18:50.164 10:32:43 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:50.164 10:32:44 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:18:50.164 10:32:44 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:50.164 10:32:44 -- common/autotest_common.sh@640 -- # local es=0 00:18:50.164 10:32:44 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:50.164 10:32:44 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:50.164 10:32:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:50.164 10:32:44 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:50.164 10:32:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:50.164 10:32:44 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:50.164 10:32:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:50.164 10:32:44 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:50.164 10:32:44 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:18:50.164 10:32:44 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:50.422 [2024-07-12 10:32:44.221338] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:50.422 [2024-07-12 10:32:44.223074] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:50.422 [2024-07-12 10:32:44.223260] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:50.422 [2024-07-12 10:32:44.223340] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:18:50.422 [2024-07-12 10:32:44.223463] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:18:50.422 [2024-07-12 10:32:44.223657] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:18:50.422 [2024-07-12 10:32:44.223838] 
bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:18:50.422 [2024-07-12 10:32:44.224009] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:18:50.423 [2024-07-12 10:32:44.224130] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:50.423 [2024-07-12 10:32:44.224222] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state configuring 00:18:50.423 request: 00:18:50.423 { 00:18:50.423 "name": "raid_bdev1", 00:18:50.423 "raid_level": "raid0", 00:18:50.423 "base_bdevs": [ 00:18:50.423 "malloc1", 00:18:50.423 "malloc2", 00:18:50.423 "malloc3", 00:18:50.423 "malloc4" 00:18:50.423 ], 00:18:50.423 "superblock": false, 00:18:50.423 "strip_size_kb": 64, 00:18:50.423 "method": "bdev_raid_create", 00:18:50.423 "req_id": 1 00:18:50.423 } 00:18:50.423 Got JSON-RPC error response 00:18:50.423 response: 00:18:50.423 { 00:18:50.423 "code": -17, 00:18:50.423 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:50.423 } 00:18:50.423 10:32:44 -- common/autotest_common.sh@643 -- # es=1 00:18:50.423 10:32:44 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:18:50.423 10:32:44 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:18:50.423 10:32:44 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:18:50.423 10:32:44 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:50.423 10:32:44 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:18:50.700 10:32:44 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:18:50.700 10:32:44 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:18:50.700 10:32:44 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:50.700 [2024-07-12 10:32:44.575091] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:50.700 [2024-07-12 10:32:44.575269] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:50.700 [2024-07-12 10:32:44.575331] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:50.700 [2024-07-12 10:32:44.575486] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:50.700 [2024-07-12 10:32:44.577329] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:50.700 [2024-07-12 10:32:44.577516] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:50.700 [2024-07-12 10:32:44.577747] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:18:50.700 [2024-07-12 10:32:44.577892] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:50.700 pt1 00:18:50.700 10:32:44 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:18:50.700 10:32:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:50.700 10:32:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:50.700 10:32:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:50.700 10:32:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:50.700 10:32:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:50.700 10:32:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 
00:18:50.700 10:32:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:50.700 10:32:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:50.700 10:32:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:50.700 10:32:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:50.700 10:32:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:50.958 10:32:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:50.958 "name": "raid_bdev1", 00:18:50.958 "uuid": "fdadc917-e74f-4708-8501-f8fa8e2d0f56", 00:18:50.958 "strip_size_kb": 64, 00:18:50.958 "state": "configuring", 00:18:50.958 "raid_level": "raid0", 00:18:50.958 "superblock": true, 00:18:50.958 "num_base_bdevs": 4, 00:18:50.958 "num_base_bdevs_discovered": 1, 00:18:50.958 "num_base_bdevs_operational": 4, 00:18:50.958 "base_bdevs_list": [ 00:18:50.958 { 00:18:50.958 "name": "pt1", 00:18:50.958 "uuid": "48ae0b53-01a4-51cd-a4fd-ccae10256f7e", 00:18:50.958 "is_configured": true, 00:18:50.958 "data_offset": 2048, 00:18:50.958 "data_size": 63488 00:18:50.958 }, 00:18:50.958 { 00:18:50.958 "name": null, 00:18:50.958 "uuid": "a72bb14c-e69a-5809-b742-df4bcdbd8a60", 00:18:50.958 "is_configured": false, 00:18:50.958 "data_offset": 2048, 00:18:50.958 "data_size": 63488 00:18:50.958 }, 00:18:50.958 { 00:18:50.958 "name": null, 00:18:50.958 "uuid": "105bff6f-eb4d-5e80-91e4-9cbcc0658ae7", 00:18:50.958 "is_configured": false, 00:18:50.958 "data_offset": 2048, 00:18:50.958 "data_size": 63488 00:18:50.958 }, 00:18:50.958 { 00:18:50.958 "name": null, 00:18:50.958 "uuid": "d6c8ded7-21ab-5132-8f48-ad14db10ed14", 00:18:50.958 "is_configured": false, 00:18:50.958 "data_offset": 2048, 00:18:50.958 "data_size": 63488 00:18:50.958 } 00:18:50.958 ] 00:18:50.958 }' 00:18:50.958 10:32:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:50.958 10:32:44 -- common/autotest_common.sh@10 -- # set +x 00:18:51.887 10:32:45 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:18:51.887 10:32:45 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:51.887 [2024-07-12 10:32:45.699280] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:51.887 [2024-07-12 10:32:45.699484] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:51.887 [2024-07-12 10:32:45.699555] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:51.887 [2024-07-12 10:32:45.699704] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:51.887 [2024-07-12 10:32:45.700174] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:51.887 [2024-07-12 10:32:45.700353] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:51.887 [2024-07-12 10:32:45.700535] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:51.887 [2024-07-12 10:32:45.700654] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:51.887 pt2 00:18:51.887 10:32:45 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:52.144 [2024-07-12 10:32:45.887321] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:52.144 10:32:45 -- bdev/bdev_raid.sh@418 -- # 
verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:18:52.144 10:32:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:52.144 10:32:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:52.144 10:32:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:52.144 10:32:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:52.144 10:32:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:52.144 10:32:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:52.144 10:32:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:52.144 10:32:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:52.144 10:32:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:52.144 10:32:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:52.144 10:32:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.454 10:32:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:52.454 "name": "raid_bdev1", 00:18:52.454 "uuid": "fdadc917-e74f-4708-8501-f8fa8e2d0f56", 00:18:52.454 "strip_size_kb": 64, 00:18:52.454 "state": "configuring", 00:18:52.454 "raid_level": "raid0", 00:18:52.454 "superblock": true, 00:18:52.454 "num_base_bdevs": 4, 00:18:52.454 "num_base_bdevs_discovered": 1, 00:18:52.454 "num_base_bdevs_operational": 4, 00:18:52.454 "base_bdevs_list": [ 00:18:52.454 { 00:18:52.454 "name": "pt1", 00:18:52.454 "uuid": "48ae0b53-01a4-51cd-a4fd-ccae10256f7e", 00:18:52.454 "is_configured": true, 00:18:52.454 "data_offset": 2048, 00:18:52.454 "data_size": 63488 00:18:52.454 }, 00:18:52.454 { 00:18:52.454 "name": null, 00:18:52.454 "uuid": "a72bb14c-e69a-5809-b742-df4bcdbd8a60", 00:18:52.454 "is_configured": false, 00:18:52.454 "data_offset": 2048, 00:18:52.454 "data_size": 63488 00:18:52.454 }, 00:18:52.454 { 00:18:52.454 "name": null, 00:18:52.454 "uuid": "105bff6f-eb4d-5e80-91e4-9cbcc0658ae7", 00:18:52.454 "is_configured": false, 00:18:52.454 "data_offset": 2048, 00:18:52.454 "data_size": 63488 00:18:52.454 }, 00:18:52.454 { 00:18:52.454 "name": null, 00:18:52.454 "uuid": "d6c8ded7-21ab-5132-8f48-ad14db10ed14", 00:18:52.454 "is_configured": false, 00:18:52.454 "data_offset": 2048, 00:18:52.454 "data_size": 63488 00:18:52.454 } 00:18:52.454 ] 00:18:52.454 }' 00:18:52.454 10:32:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:52.454 10:32:46 -- common/autotest_common.sh@10 -- # set +x 00:18:53.017 10:32:46 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:18:53.017 10:32:46 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:53.017 10:32:46 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:53.017 [2024-07-12 10:32:46.923543] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:53.017 [2024-07-12 10:32:46.923781] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:53.017 [2024-07-12 10:32:46.923849] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:53.017 [2024-07-12 10:32:46.923961] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:53.017 [2024-07-12 10:32:46.924481] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:53.017 [2024-07-12 10:32:46.924679] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: pt2 00:18:53.017 [2024-07-12 10:32:46.924852] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:53.017 [2024-07-12 10:32:46.924966] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:53.017 pt2 00:18:53.275 10:32:46 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:53.275 10:32:46 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:53.275 10:32:46 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:53.275 [2024-07-12 10:32:47.163587] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:53.275 [2024-07-12 10:32:47.163690] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:53.275 [2024-07-12 10:32:47.163776] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:53.275 [2024-07-12 10:32:47.163899] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:53.275 [2024-07-12 10:32:47.164309] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:53.275 [2024-07-12 10:32:47.164507] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:53.275 [2024-07-12 10:32:47.164711] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:18:53.275 [2024-07-12 10:32:47.164836] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:53.275 pt3 00:18:53.275 10:32:47 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:53.275 10:32:47 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:53.275 10:32:47 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:53.532 [2024-07-12 10:32:47.423626] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:53.532 [2024-07-12 10:32:47.423822] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:53.532 [2024-07-12 10:32:47.423889] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:18:53.532 [2024-07-12 10:32:47.424012] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:53.532 [2024-07-12 10:32:47.424414] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:53.532 [2024-07-12 10:32:47.424606] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:53.532 [2024-07-12 10:32:47.424817] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:18:53.532 [2024-07-12 10:32:47.424943] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:53.532 [2024-07-12 10:32:47.425099] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:18:53.532 [2024-07-12 10:32:47.425197] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:53.532 [2024-07-12 10:32:47.425332] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:53.532 [2024-07-12 10:32:47.425672] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:18:53.532 [2024-07-12 10:32:47.425784] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x61600000a880 00:18:53.532 [2024-07-12 10:32:47.425994] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:53.532 pt4 00:18:53.532 10:32:47 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:53.532 10:32:47 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:53.532 10:32:47 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:18:53.532 10:32:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:53.532 10:32:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:53.532 10:32:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:53.532 10:32:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:53.532 10:32:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:53.532 10:32:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:53.532 10:32:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:53.532 10:32:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:53.532 10:32:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:53.532 10:32:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:53.532 10:32:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:53.790 10:32:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:53.790 "name": "raid_bdev1", 00:18:53.790 "uuid": "fdadc917-e74f-4708-8501-f8fa8e2d0f56", 00:18:53.790 "strip_size_kb": 64, 00:18:53.790 "state": "online", 00:18:53.790 "raid_level": "raid0", 00:18:53.790 "superblock": true, 00:18:53.790 "num_base_bdevs": 4, 00:18:53.790 "num_base_bdevs_discovered": 4, 00:18:53.790 "num_base_bdevs_operational": 4, 00:18:53.790 "base_bdevs_list": [ 00:18:53.790 { 00:18:53.790 "name": "pt1", 00:18:53.790 "uuid": "48ae0b53-01a4-51cd-a4fd-ccae10256f7e", 00:18:53.790 "is_configured": true, 00:18:53.790 "data_offset": 2048, 00:18:53.790 "data_size": 63488 00:18:53.790 }, 00:18:53.791 { 00:18:53.791 "name": "pt2", 00:18:53.791 "uuid": "a72bb14c-e69a-5809-b742-df4bcdbd8a60", 00:18:53.791 "is_configured": true, 00:18:53.791 "data_offset": 2048, 00:18:53.791 "data_size": 63488 00:18:53.791 }, 00:18:53.791 { 00:18:53.791 "name": "pt3", 00:18:53.791 "uuid": "105bff6f-eb4d-5e80-91e4-9cbcc0658ae7", 00:18:53.791 "is_configured": true, 00:18:53.791 "data_offset": 2048, 00:18:53.791 "data_size": 63488 00:18:53.791 }, 00:18:53.791 { 00:18:53.791 "name": "pt4", 00:18:53.791 "uuid": "d6c8ded7-21ab-5132-8f48-ad14db10ed14", 00:18:53.791 "is_configured": true, 00:18:53.791 "data_offset": 2048, 00:18:53.791 "data_size": 63488 00:18:53.791 } 00:18:53.791 ] 00:18:53.791 }' 00:18:53.791 10:32:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:53.791 10:32:47 -- common/autotest_common.sh@10 -- # set +x 00:18:54.724 10:32:48 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:54.724 10:32:48 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:18:54.724 [2024-07-12 10:32:48.548033] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:54.724 10:32:48 -- bdev/bdev_raid.sh@430 -- # '[' fdadc917-e74f-4708-8501-f8fa8e2d0f56 '!=' fdadc917-e74f-4708-8501-f8fa8e2d0f56 ']' 00:18:54.724 10:32:48 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:18:54.724 10:32:48 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:54.724 10:32:48 -- bdev/bdev_raid.sh@197 -- # return 1 
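Taken together, the superblock test traced above reduces to a short RPC sequence: build the raid0 array on the passthru bdevs, check that it reports online, tear it down, and then prove that the stale superblocks left behind on the malloc disks block a direct re-create. A condensed, hand-written equivalent under those assumptions (the bare state checks stand in for verify_raid_bdev_state; the jq filter is the one from the log):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # raid0, 64 KiB strip, with on-disk superblocks (-s), over the passthru bdevs
    "$rpc" -s "$sock" bdev_raid_create -z 64 -r raid0 \
        -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
    "$rpc" -s "$sock" bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "raid_bdev1") | .state'   # expect "online"
    # delete the array and the passthru layer; the malloc bdevs survive
    "$rpc" -s "$sock" bdev_raid_delete raid_bdev1
    for i in 1 2 3 4; do "$rpc" -s "$sock" bdev_passthru_delete "pt$i"; done
    # existing superblocks are found on malloc1..4, so this create must fail
    # with -17 "File exists" -- the condition the NOT wrapper asserts above
    "$rpc" -s "$sock" bdev_raid_create -z 64 -r raid0 \
        -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 \
        || echo 'failed as expected'

The intermediate steps in the trace (re-registering pt1 from its superblock, removing pt2 and re-checking the configuring state) are elided here for brevity.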
00:18:54.724 10:32:48 -- bdev/bdev_raid.sh@511 -- # killprocess 122494 00:18:54.724 10:32:48 -- common/autotest_common.sh@926 -- # '[' -z 122494 ']' 00:18:54.724 10:32:48 -- common/autotest_common.sh@930 -- # kill -0 122494 00:18:54.724 10:32:48 -- common/autotest_common.sh@931 -- # uname 00:18:54.724 10:32:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:54.725 10:32:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 122494 00:18:54.725 killing process with pid 122494 00:18:54.725 10:32:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:54.725 10:32:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:54.725 10:32:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 122494' 00:18:54.725 10:32:48 -- common/autotest_common.sh@945 -- # kill 122494 00:18:54.725 10:32:48 -- common/autotest_common.sh@950 -- # wait 122494 00:18:54.725 [2024-07-12 10:32:48.583577] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:54.725 [2024-07-12 10:32:48.583649] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:54.725 [2024-07-12 10:32:48.583722] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:54.725 [2024-07-12 10:32:48.583783] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:18:54.982 [2024-07-12 10:32:48.835439] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:55.915 ************************************ 00:18:55.915 END TEST raid_superblock_test 00:18:55.915 ************************************ 00:18:55.915 10:32:49 -- bdev/bdev_raid.sh@513 -- # return 0 00:18:55.915 00:18:55.915 real 0m11.176s 00:18:55.915 user 0m19.669s 00:18:55.915 sys 0m1.284s 00:18:55.915 10:32:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:55.915 10:32:49 -- common/autotest_common.sh@10 -- # set +x 00:18:55.915 10:32:49 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:18:55.915 10:32:49 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:18:55.915 10:32:49 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:18:55.915 10:32:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:55.915 10:32:49 -- common/autotest_common.sh@10 -- # set +x 00:18:55.915 ************************************ 00:18:55.915 START TEST raid_state_function_test 00:18:55.915 ************************************ 00:18:55.915 10:32:49 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 4 false 00:18:55.915 10:32:49 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:18:55.915 10:32:49 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:18:55.915 10:32:49 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:18:55.915 10:32:49 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:18:55.915 10:32:49 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:18:55.915 10:32:49 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:18:55.915 10:32:49 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:55.915 10:32:49 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:18:55.915 10:32:49 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:55.915 10:32:49 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:55.915 10:32:49 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:18:55.915 10:32:49 -- 
bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:55.915 10:32:49 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:55.915 10:32:49 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:18:55.915 10:32:49 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:55.915 10:32:49 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:55.915 10:32:49 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:18:55.915 10:32:49 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:55.915 10:32:49 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:55.915 10:32:49 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:18:55.915 10:32:49 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:18:55.915 10:32:49 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:18:55.915 10:32:49 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:18:55.915 10:32:49 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:18:55.915 10:32:49 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:18:55.915 10:32:49 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:18:55.915 10:32:49 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:18:55.915 10:32:49 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:18:55.915 10:32:49 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:18:55.915 10:32:49 -- bdev/bdev_raid.sh@226 -- # raid_pid=122832 00:18:55.915 10:32:49 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 122832' 00:18:55.915 10:32:49 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:55.915 Process raid pid: 122832 00:18:55.915 10:32:49 -- bdev/bdev_raid.sh@228 -- # waitforlisten 122832 /var/tmp/spdk-raid.sock 00:18:55.915 10:32:49 -- common/autotest_common.sh@819 -- # '[' -z 122832 ']' 00:18:55.915 10:32:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:55.915 10:32:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:55.915 10:32:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:55.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:55.916 10:32:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:55.916 10:32:49 -- common/autotest_common.sh@10 -- # set +x 00:18:56.173 [2024-07-12 10:32:49.889675] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:18:56.173 [2024-07-12 10:32:49.890081] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:56.173 [2024-07-12 10:32:50.058050] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.431 [2024-07-12 10:32:50.218047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:56.689 [2024-07-12 10:32:50.385343] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:56.946 10:32:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:56.946 10:32:50 -- common/autotest_common.sh@852 -- # return 0 00:18:56.946 10:32:50 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:57.204 [2024-07-12 10:32:51.048958] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:57.204 [2024-07-12 10:32:51.049197] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:57.204 [2024-07-12 10:32:51.049336] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:57.204 [2024-07-12 10:32:51.049396] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:57.204 [2024-07-12 10:32:51.049608] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:57.204 [2024-07-12 10:32:51.049684] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:57.204 [2024-07-12 10:32:51.049901] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:57.204 [2024-07-12 10:32:51.049964] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:57.204 10:32:51 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:57.204 10:32:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:57.204 10:32:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:57.204 10:32:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:57.204 10:32:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:57.204 10:32:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:57.204 10:32:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:57.204 10:32:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:57.204 10:32:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:57.204 10:32:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:57.204 10:32:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:57.204 10:32:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:57.462 10:32:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:57.462 "name": "Existed_Raid", 00:18:57.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:57.462 "strip_size_kb": 64, 00:18:57.462 "state": "configuring", 00:18:57.462 "raid_level": "concat", 00:18:57.462 "superblock": false, 00:18:57.462 "num_base_bdevs": 4, 00:18:57.462 "num_base_bdevs_discovered": 0, 00:18:57.462 "num_base_bdevs_operational": 4, 00:18:57.462 "base_bdevs_list": [ 00:18:57.462 { 00:18:57.462 
"name": "BaseBdev1", 00:18:57.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:57.462 "is_configured": false, 00:18:57.462 "data_offset": 0, 00:18:57.462 "data_size": 0 00:18:57.462 }, 00:18:57.462 { 00:18:57.462 "name": "BaseBdev2", 00:18:57.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:57.462 "is_configured": false, 00:18:57.462 "data_offset": 0, 00:18:57.462 "data_size": 0 00:18:57.462 }, 00:18:57.462 { 00:18:57.462 "name": "BaseBdev3", 00:18:57.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:57.462 "is_configured": false, 00:18:57.462 "data_offset": 0, 00:18:57.462 "data_size": 0 00:18:57.462 }, 00:18:57.462 { 00:18:57.462 "name": "BaseBdev4", 00:18:57.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:57.462 "is_configured": false, 00:18:57.462 "data_offset": 0, 00:18:57.462 "data_size": 0 00:18:57.462 } 00:18:57.462 ] 00:18:57.462 }' 00:18:57.462 10:32:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:57.462 10:32:51 -- common/autotest_common.sh@10 -- # set +x 00:18:58.029 10:32:51 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:58.287 [2024-07-12 10:32:52.056979] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:58.287 [2024-07-12 10:32:52.057129] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:18:58.287 10:32:52 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:58.545 [2024-07-12 10:32:52.297046] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:58.545 [2024-07-12 10:32:52.297215] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:58.545 [2024-07-12 10:32:52.297315] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:58.545 [2024-07-12 10:32:52.297380] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:58.545 [2024-07-12 10:32:52.297503] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:58.545 [2024-07-12 10:32:52.297575] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:58.545 [2024-07-12 10:32:52.297666] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:58.545 [2024-07-12 10:32:52.297722] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:58.545 10:32:52 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:58.803 [2024-07-12 10:32:52.582879] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:58.803 BaseBdev1 00:18:58.803 10:32:52 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:18:58.803 10:32:52 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:18:58.803 10:32:52 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:58.803 10:32:52 -- common/autotest_common.sh@889 -- # local i 00:18:58.803 10:32:52 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:58.803 10:32:52 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:58.803 10:32:52 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:59.062 10:32:52 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:59.062 [ 00:18:59.062 { 00:18:59.062 "name": "BaseBdev1", 00:18:59.062 "aliases": [ 00:18:59.062 "28fb5152-2864-4ec8-a62f-61b277fd3bbb" 00:18:59.062 ], 00:18:59.062 "product_name": "Malloc disk", 00:18:59.062 "block_size": 512, 00:18:59.062 "num_blocks": 65536, 00:18:59.062 "uuid": "28fb5152-2864-4ec8-a62f-61b277fd3bbb", 00:18:59.062 "assigned_rate_limits": { 00:18:59.062 "rw_ios_per_sec": 0, 00:18:59.062 "rw_mbytes_per_sec": 0, 00:18:59.062 "r_mbytes_per_sec": 0, 00:18:59.062 "w_mbytes_per_sec": 0 00:18:59.062 }, 00:18:59.062 "claimed": true, 00:18:59.062 "claim_type": "exclusive_write", 00:18:59.062 "zoned": false, 00:18:59.062 "supported_io_types": { 00:18:59.062 "read": true, 00:18:59.062 "write": true, 00:18:59.062 "unmap": true, 00:18:59.062 "write_zeroes": true, 00:18:59.062 "flush": true, 00:18:59.062 "reset": true, 00:18:59.062 "compare": false, 00:18:59.062 "compare_and_write": false, 00:18:59.062 "abort": true, 00:18:59.062 "nvme_admin": false, 00:18:59.062 "nvme_io": false 00:18:59.062 }, 00:18:59.062 "memory_domains": [ 00:18:59.062 { 00:18:59.062 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:59.062 "dma_device_type": 2 00:18:59.062 } 00:18:59.062 ], 00:18:59.062 "driver_specific": {} 00:18:59.062 } 00:18:59.062 ] 00:18:59.062 10:32:52 -- common/autotest_common.sh@895 -- # return 0 00:18:59.062 10:32:52 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:59.062 10:32:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:59.062 10:32:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:59.062 10:32:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:59.062 10:32:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:59.062 10:32:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:59.062 10:32:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:59.062 10:32:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:59.062 10:32:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:59.062 10:32:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:59.062 10:32:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:59.062 10:32:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:59.319 10:32:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:59.319 "name": "Existed_Raid", 00:18:59.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.319 "strip_size_kb": 64, 00:18:59.319 "state": "configuring", 00:18:59.319 "raid_level": "concat", 00:18:59.319 "superblock": false, 00:18:59.319 "num_base_bdevs": 4, 00:18:59.320 "num_base_bdevs_discovered": 1, 00:18:59.320 "num_base_bdevs_operational": 4, 00:18:59.320 "base_bdevs_list": [ 00:18:59.320 { 00:18:59.320 "name": "BaseBdev1", 00:18:59.320 "uuid": "28fb5152-2864-4ec8-a62f-61b277fd3bbb", 00:18:59.320 "is_configured": true, 00:18:59.320 "data_offset": 0, 00:18:59.320 "data_size": 65536 00:18:59.320 }, 00:18:59.320 { 00:18:59.320 "name": "BaseBdev2", 00:18:59.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.320 "is_configured": false, 00:18:59.320 "data_offset": 0, 00:18:59.320 "data_size": 0 00:18:59.320 }, 
00:18:59.320 { 00:18:59.320 "name": "BaseBdev3", 00:18:59.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.320 "is_configured": false, 00:18:59.320 "data_offset": 0, 00:18:59.320 "data_size": 0 00:18:59.320 }, 00:18:59.320 { 00:18:59.320 "name": "BaseBdev4", 00:18:59.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.320 "is_configured": false, 00:18:59.320 "data_offset": 0, 00:18:59.320 "data_size": 0 00:18:59.320 } 00:18:59.320 ] 00:18:59.320 }' 00:18:59.320 10:32:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:59.320 10:32:53 -- common/autotest_common.sh@10 -- # set +x 00:18:59.886 10:32:53 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:00.143 [2024-07-12 10:32:53.963119] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:00.143 [2024-07-12 10:32:53.963288] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:19:00.143 10:32:53 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:19:00.143 10:32:53 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:00.401 [2024-07-12 10:32:54.175210] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:00.401 [2024-07-12 10:32:54.176999] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:00.401 [2024-07-12 10:32:54.177200] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:00.401 [2024-07-12 10:32:54.177303] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:00.401 [2024-07-12 10:32:54.177360] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:00.401 [2024-07-12 10:32:54.177534] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:00.401 [2024-07-12 10:32:54.177587] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:00.401 10:32:54 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:19:00.401 10:32:54 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:00.401 10:32:54 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:00.401 10:32:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:00.401 10:32:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:00.401 10:32:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:00.401 10:32:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:00.401 10:32:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:00.401 10:32:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:00.401 10:32:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:00.401 10:32:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:00.401 10:32:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:00.401 10:32:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:00.401 10:32:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:00.659 10:32:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:00.659 "name": "Existed_Raid", 00:19:00.659 
"uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.659 "strip_size_kb": 64, 00:19:00.659 "state": "configuring", 00:19:00.659 "raid_level": "concat", 00:19:00.659 "superblock": false, 00:19:00.659 "num_base_bdevs": 4, 00:19:00.659 "num_base_bdevs_discovered": 1, 00:19:00.659 "num_base_bdevs_operational": 4, 00:19:00.659 "base_bdevs_list": [ 00:19:00.659 { 00:19:00.659 "name": "BaseBdev1", 00:19:00.659 "uuid": "28fb5152-2864-4ec8-a62f-61b277fd3bbb", 00:19:00.659 "is_configured": true, 00:19:00.659 "data_offset": 0, 00:19:00.659 "data_size": 65536 00:19:00.659 }, 00:19:00.659 { 00:19:00.659 "name": "BaseBdev2", 00:19:00.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.659 "is_configured": false, 00:19:00.659 "data_offset": 0, 00:19:00.659 "data_size": 0 00:19:00.659 }, 00:19:00.659 { 00:19:00.659 "name": "BaseBdev3", 00:19:00.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.659 "is_configured": false, 00:19:00.659 "data_offset": 0, 00:19:00.659 "data_size": 0 00:19:00.659 }, 00:19:00.659 { 00:19:00.659 "name": "BaseBdev4", 00:19:00.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.659 "is_configured": false, 00:19:00.659 "data_offset": 0, 00:19:00.659 "data_size": 0 00:19:00.659 } 00:19:00.659 ] 00:19:00.659 }' 00:19:00.659 10:32:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:00.659 10:32:54 -- common/autotest_common.sh@10 -- # set +x 00:19:01.225 10:32:55 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:01.483 [2024-07-12 10:32:55.320443] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:01.483 BaseBdev2 00:19:01.483 10:32:55 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:19:01.483 10:32:55 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:19:01.483 10:32:55 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:01.483 10:32:55 -- common/autotest_common.sh@889 -- # local i 00:19:01.483 10:32:55 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:01.483 10:32:55 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:01.483 10:32:55 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:01.742 10:32:55 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:02.000 [ 00:19:02.000 { 00:19:02.000 "name": "BaseBdev2", 00:19:02.000 "aliases": [ 00:19:02.000 "ff7f407d-42ce-4ad4-8ca4-9ca7b4229e6c" 00:19:02.000 ], 00:19:02.000 "product_name": "Malloc disk", 00:19:02.000 "block_size": 512, 00:19:02.000 "num_blocks": 65536, 00:19:02.000 "uuid": "ff7f407d-42ce-4ad4-8ca4-9ca7b4229e6c", 00:19:02.000 "assigned_rate_limits": { 00:19:02.000 "rw_ios_per_sec": 0, 00:19:02.000 "rw_mbytes_per_sec": 0, 00:19:02.000 "r_mbytes_per_sec": 0, 00:19:02.000 "w_mbytes_per_sec": 0 00:19:02.000 }, 00:19:02.000 "claimed": true, 00:19:02.000 "claim_type": "exclusive_write", 00:19:02.000 "zoned": false, 00:19:02.000 "supported_io_types": { 00:19:02.000 "read": true, 00:19:02.000 "write": true, 00:19:02.000 "unmap": true, 00:19:02.000 "write_zeroes": true, 00:19:02.000 "flush": true, 00:19:02.000 "reset": true, 00:19:02.000 "compare": false, 00:19:02.000 "compare_and_write": false, 00:19:02.000 "abort": true, 00:19:02.000 "nvme_admin": false, 00:19:02.000 "nvme_io": false 00:19:02.000 }, 00:19:02.000 "memory_domains": [ 
00:19:02.000 { 00:19:02.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:02.000 "dma_device_type": 2 00:19:02.000 } 00:19:02.000 ], 00:19:02.000 "driver_specific": {} 00:19:02.000 } 00:19:02.000 ] 00:19:02.000 10:32:55 -- common/autotest_common.sh@895 -- # return 0 00:19:02.000 10:32:55 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:02.000 10:32:55 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:02.000 10:32:55 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:02.000 10:32:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:02.000 10:32:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:02.000 10:32:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:02.000 10:32:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:02.000 10:32:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:02.000 10:32:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:02.000 10:32:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:02.000 10:32:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:02.000 10:32:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:02.000 10:32:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:02.000 10:32:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:02.258 10:32:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:02.258 "name": "Existed_Raid", 00:19:02.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.258 "strip_size_kb": 64, 00:19:02.258 "state": "configuring", 00:19:02.258 "raid_level": "concat", 00:19:02.258 "superblock": false, 00:19:02.258 "num_base_bdevs": 4, 00:19:02.258 "num_base_bdevs_discovered": 2, 00:19:02.258 "num_base_bdevs_operational": 4, 00:19:02.258 "base_bdevs_list": [ 00:19:02.258 { 00:19:02.258 "name": "BaseBdev1", 00:19:02.259 "uuid": "28fb5152-2864-4ec8-a62f-61b277fd3bbb", 00:19:02.259 "is_configured": true, 00:19:02.259 "data_offset": 0, 00:19:02.259 "data_size": 65536 00:19:02.259 }, 00:19:02.259 { 00:19:02.259 "name": "BaseBdev2", 00:19:02.259 "uuid": "ff7f407d-42ce-4ad4-8ca4-9ca7b4229e6c", 00:19:02.259 "is_configured": true, 00:19:02.259 "data_offset": 0, 00:19:02.259 "data_size": 65536 00:19:02.259 }, 00:19:02.259 { 00:19:02.259 "name": "BaseBdev3", 00:19:02.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.259 "is_configured": false, 00:19:02.259 "data_offset": 0, 00:19:02.259 "data_size": 0 00:19:02.259 }, 00:19:02.259 { 00:19:02.259 "name": "BaseBdev4", 00:19:02.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.259 "is_configured": false, 00:19:02.259 "data_offset": 0, 00:19:02.259 "data_size": 0 00:19:02.259 } 00:19:02.259 ] 00:19:02.259 }' 00:19:02.259 10:32:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:02.259 10:32:55 -- common/autotest_common.sh@10 -- # set +x 00:19:02.824 10:32:56 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:03.082 [2024-07-12 10:32:56.888616] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:03.082 BaseBdev3 00:19:03.082 10:32:56 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:19:03.082 10:32:56 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:19:03.082 10:32:56 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:03.082 
10:32:56 -- common/autotest_common.sh@889 -- # local i 00:19:03.082 10:32:56 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:03.082 10:32:56 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:03.082 10:32:56 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:03.340 10:32:57 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:03.597 [ 00:19:03.597 { 00:19:03.597 "name": "BaseBdev3", 00:19:03.597 "aliases": [ 00:19:03.597 "cfe18728-6744-438f-91a0-e42884d511b8" 00:19:03.597 ], 00:19:03.597 "product_name": "Malloc disk", 00:19:03.597 "block_size": 512, 00:19:03.597 "num_blocks": 65536, 00:19:03.597 "uuid": "cfe18728-6744-438f-91a0-e42884d511b8", 00:19:03.597 "assigned_rate_limits": { 00:19:03.597 "rw_ios_per_sec": 0, 00:19:03.597 "rw_mbytes_per_sec": 0, 00:19:03.597 "r_mbytes_per_sec": 0, 00:19:03.597 "w_mbytes_per_sec": 0 00:19:03.597 }, 00:19:03.597 "claimed": true, 00:19:03.597 "claim_type": "exclusive_write", 00:19:03.597 "zoned": false, 00:19:03.597 "supported_io_types": { 00:19:03.598 "read": true, 00:19:03.598 "write": true, 00:19:03.598 "unmap": true, 00:19:03.598 "write_zeroes": true, 00:19:03.598 "flush": true, 00:19:03.598 "reset": true, 00:19:03.598 "compare": false, 00:19:03.598 "compare_and_write": false, 00:19:03.598 "abort": true, 00:19:03.598 "nvme_admin": false, 00:19:03.598 "nvme_io": false 00:19:03.598 }, 00:19:03.598 "memory_domains": [ 00:19:03.598 { 00:19:03.598 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:03.598 "dma_device_type": 2 00:19:03.598 } 00:19:03.598 ], 00:19:03.598 "driver_specific": {} 00:19:03.598 } 00:19:03.598 ] 00:19:03.598 10:32:57 -- common/autotest_common.sh@895 -- # return 0 00:19:03.598 10:32:57 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:03.598 10:32:57 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:03.598 10:32:57 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:03.598 10:32:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:03.598 10:32:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:03.598 10:32:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:03.598 10:32:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:03.598 10:32:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:03.598 10:32:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:03.598 10:32:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:03.598 10:32:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:03.598 10:32:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:03.598 10:32:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:03.598 10:32:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:03.855 10:32:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:03.855 "name": "Existed_Raid", 00:19:03.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.855 "strip_size_kb": 64, 00:19:03.855 "state": "configuring", 00:19:03.855 "raid_level": "concat", 00:19:03.855 "superblock": false, 00:19:03.855 "num_base_bdevs": 4, 00:19:03.855 "num_base_bdevs_discovered": 3, 00:19:03.855 "num_base_bdevs_operational": 4, 00:19:03.855 "base_bdevs_list": [ 00:19:03.855 { 00:19:03.855 "name": 
"BaseBdev1", 00:19:03.855 "uuid": "28fb5152-2864-4ec8-a62f-61b277fd3bbb", 00:19:03.855 "is_configured": true, 00:19:03.855 "data_offset": 0, 00:19:03.855 "data_size": 65536 00:19:03.855 }, 00:19:03.855 { 00:19:03.855 "name": "BaseBdev2", 00:19:03.855 "uuid": "ff7f407d-42ce-4ad4-8ca4-9ca7b4229e6c", 00:19:03.855 "is_configured": true, 00:19:03.855 "data_offset": 0, 00:19:03.855 "data_size": 65536 00:19:03.855 }, 00:19:03.855 { 00:19:03.855 "name": "BaseBdev3", 00:19:03.855 "uuid": "cfe18728-6744-438f-91a0-e42884d511b8", 00:19:03.855 "is_configured": true, 00:19:03.855 "data_offset": 0, 00:19:03.855 "data_size": 65536 00:19:03.855 }, 00:19:03.855 { 00:19:03.855 "name": "BaseBdev4", 00:19:03.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.855 "is_configured": false, 00:19:03.855 "data_offset": 0, 00:19:03.855 "data_size": 0 00:19:03.855 } 00:19:03.855 ] 00:19:03.855 }' 00:19:03.855 10:32:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:03.855 10:32:57 -- common/autotest_common.sh@10 -- # set +x 00:19:04.787 10:32:58 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:19:04.787 [2024-07-12 10:32:58.584383] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:04.787 [2024-07-12 10:32:58.584603] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:19:04.787 [2024-07-12 10:32:58.584642] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:19:04.787 [2024-07-12 10:32:58.584855] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:19:04.787 [2024-07-12 10:32:58.585319] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:19:04.787 [2024-07-12 10:32:58.585446] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:19:04.787 [2024-07-12 10:32:58.585794] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:04.787 BaseBdev4 00:19:04.787 10:32:58 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:19:04.787 10:32:58 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:19:04.787 10:32:58 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:04.787 10:32:58 -- common/autotest_common.sh@889 -- # local i 00:19:04.787 10:32:58 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:04.787 10:32:58 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:04.787 10:32:58 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:05.046 10:32:58 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:05.304 [ 00:19:05.304 { 00:19:05.304 "name": "BaseBdev4", 00:19:05.304 "aliases": [ 00:19:05.304 "7462bb32-b702-4616-9fe1-5bbaf1f7e1c1" 00:19:05.304 ], 00:19:05.304 "product_name": "Malloc disk", 00:19:05.304 "block_size": 512, 00:19:05.304 "num_blocks": 65536, 00:19:05.304 "uuid": "7462bb32-b702-4616-9fe1-5bbaf1f7e1c1", 00:19:05.304 "assigned_rate_limits": { 00:19:05.304 "rw_ios_per_sec": 0, 00:19:05.304 "rw_mbytes_per_sec": 0, 00:19:05.304 "r_mbytes_per_sec": 0, 00:19:05.304 "w_mbytes_per_sec": 0 00:19:05.304 }, 00:19:05.304 "claimed": true, 00:19:05.304 "claim_type": "exclusive_write", 00:19:05.304 "zoned": false, 00:19:05.304 
"supported_io_types": { 00:19:05.304 "read": true, 00:19:05.304 "write": true, 00:19:05.304 "unmap": true, 00:19:05.304 "write_zeroes": true, 00:19:05.304 "flush": true, 00:19:05.304 "reset": true, 00:19:05.304 "compare": false, 00:19:05.304 "compare_and_write": false, 00:19:05.304 "abort": true, 00:19:05.304 "nvme_admin": false, 00:19:05.304 "nvme_io": false 00:19:05.304 }, 00:19:05.304 "memory_domains": [ 00:19:05.304 { 00:19:05.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:05.304 "dma_device_type": 2 00:19:05.304 } 00:19:05.304 ], 00:19:05.304 "driver_specific": {} 00:19:05.304 } 00:19:05.304 ] 00:19:05.304 10:32:59 -- common/autotest_common.sh@895 -- # return 0 00:19:05.304 10:32:59 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:05.304 10:32:59 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:05.304 10:32:59 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:19:05.304 10:32:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:05.304 10:32:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:05.304 10:32:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:05.304 10:32:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:05.304 10:32:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:05.304 10:32:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:05.304 10:32:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:05.304 10:32:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:05.304 10:32:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:05.304 10:32:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:05.304 10:32:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:05.304 10:32:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:05.304 "name": "Existed_Raid", 00:19:05.304 "uuid": "4db169cd-0ac5-48df-8270-cffd4cf502d4", 00:19:05.304 "strip_size_kb": 64, 00:19:05.304 "state": "online", 00:19:05.304 "raid_level": "concat", 00:19:05.304 "superblock": false, 00:19:05.304 "num_base_bdevs": 4, 00:19:05.304 "num_base_bdevs_discovered": 4, 00:19:05.304 "num_base_bdevs_operational": 4, 00:19:05.304 "base_bdevs_list": [ 00:19:05.304 { 00:19:05.304 "name": "BaseBdev1", 00:19:05.304 "uuid": "28fb5152-2864-4ec8-a62f-61b277fd3bbb", 00:19:05.304 "is_configured": true, 00:19:05.304 "data_offset": 0, 00:19:05.304 "data_size": 65536 00:19:05.304 }, 00:19:05.304 { 00:19:05.304 "name": "BaseBdev2", 00:19:05.304 "uuid": "ff7f407d-42ce-4ad4-8ca4-9ca7b4229e6c", 00:19:05.304 "is_configured": true, 00:19:05.305 "data_offset": 0, 00:19:05.305 "data_size": 65536 00:19:05.305 }, 00:19:05.305 { 00:19:05.305 "name": "BaseBdev3", 00:19:05.305 "uuid": "cfe18728-6744-438f-91a0-e42884d511b8", 00:19:05.305 "is_configured": true, 00:19:05.305 "data_offset": 0, 00:19:05.305 "data_size": 65536 00:19:05.305 }, 00:19:05.305 { 00:19:05.305 "name": "BaseBdev4", 00:19:05.305 "uuid": "7462bb32-b702-4616-9fe1-5bbaf1f7e1c1", 00:19:05.305 "is_configured": true, 00:19:05.305 "data_offset": 0, 00:19:05.305 "data_size": 65536 00:19:05.305 } 00:19:05.305 ] 00:19:05.305 }' 00:19:05.305 10:32:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:05.305 10:32:59 -- common/autotest_common.sh@10 -- # set +x 00:19:06.238 10:32:59 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 
00:19:06.238 [2024-07-12 10:33:00.042662] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:06.238 [2024-07-12 10:33:00.042826] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:06.238 [2024-07-12 10:33:00.043005] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:06.238 10:33:00 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:19:06.238 10:33:00 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:19:06.238 10:33:00 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:19:06.238 10:33:00 -- bdev/bdev_raid.sh@197 -- # return 1 00:19:06.238 10:33:00 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:19:06.238 10:33:00 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:19:06.238 10:33:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:06.238 10:33:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:19:06.238 10:33:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:06.238 10:33:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:06.238 10:33:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:06.238 10:33:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:06.238 10:33:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:06.238 10:33:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:06.238 10:33:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:06.238 10:33:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:06.238 10:33:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:06.496 10:33:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:06.496 "name": "Existed_Raid", 00:19:06.496 "uuid": "4db169cd-0ac5-48df-8270-cffd4cf502d4", 00:19:06.496 "strip_size_kb": 64, 00:19:06.496 "state": "offline", 00:19:06.496 "raid_level": "concat", 00:19:06.496 "superblock": false, 00:19:06.496 "num_base_bdevs": 4, 00:19:06.496 "num_base_bdevs_discovered": 3, 00:19:06.496 "num_base_bdevs_operational": 3, 00:19:06.496 "base_bdevs_list": [ 00:19:06.496 { 00:19:06.496 "name": null, 00:19:06.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:06.496 "is_configured": false, 00:19:06.496 "data_offset": 0, 00:19:06.496 "data_size": 65536 00:19:06.496 }, 00:19:06.496 { 00:19:06.496 "name": "BaseBdev2", 00:19:06.496 "uuid": "ff7f407d-42ce-4ad4-8ca4-9ca7b4229e6c", 00:19:06.496 "is_configured": true, 00:19:06.496 "data_offset": 0, 00:19:06.496 "data_size": 65536 00:19:06.496 }, 00:19:06.496 { 00:19:06.496 "name": "BaseBdev3", 00:19:06.496 "uuid": "cfe18728-6744-438f-91a0-e42884d511b8", 00:19:06.496 "is_configured": true, 00:19:06.496 "data_offset": 0, 00:19:06.496 "data_size": 65536 00:19:06.496 }, 00:19:06.496 { 00:19:06.496 "name": "BaseBdev4", 00:19:06.496 "uuid": "7462bb32-b702-4616-9fe1-5bbaf1f7e1c1", 00:19:06.496 "is_configured": true, 00:19:06.496 "data_offset": 0, 00:19:06.496 "data_size": 65536 00:19:06.496 } 00:19:06.496 ] 00:19:06.496 }' 00:19:06.496 10:33:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:06.496 10:33:00 -- common/autotest_common.sh@10 -- # set +x 00:19:07.430 10:33:00 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:19:07.430 10:33:00 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:07.430 10:33:00 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:19:07.430 10:33:00 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:07.430 10:33:01 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:07.430 10:33:01 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:07.430 10:33:01 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:07.689 [2024-07-12 10:33:01.483494] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:07.689 10:33:01 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:07.689 10:33:01 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:07.689 10:33:01 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:07.689 10:33:01 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:07.947 10:33:01 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:07.947 10:33:01 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:07.947 10:33:01 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:08.206 [2024-07-12 10:33:01.974662] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:08.206 10:33:02 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:08.206 10:33:02 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:08.206 10:33:02 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:08.206 10:33:02 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:08.463 10:33:02 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:08.463 10:33:02 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:08.463 10:33:02 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:19:08.720 [2024-07-12 10:33:02.458347] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:19:08.720 [2024-07-12 10:33:02.458551] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:19:08.720 10:33:02 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:08.720 10:33:02 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:08.720 10:33:02 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:08.720 10:33:02 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:19:08.978 10:33:02 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:19:08.978 10:33:02 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:19:08.978 10:33:02 -- bdev/bdev_raid.sh@287 -- # killprocess 122832 00:19:08.978 10:33:02 -- common/autotest_common.sh@926 -- # '[' -z 122832 ']' 00:19:08.978 10:33:02 -- common/autotest_common.sh@930 -- # kill -0 122832 00:19:08.978 10:33:02 -- common/autotest_common.sh@931 -- # uname 00:19:08.978 10:33:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:08.978 10:33:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 122832 00:19:08.978 killing process with pid 122832 00:19:08.978 10:33:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:08.978 10:33:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:08.978 10:33:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 122832' 00:19:08.978 10:33:02 -- common/autotest_common.sh@945 
-- # kill 122832 00:19:08.978 10:33:02 -- common/autotest_common.sh@950 -- # wait 122832 00:19:08.978 [2024-07-12 10:33:02.811057] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:08.978 [2024-07-12 10:33:02.811230] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:10.349 ************************************ 00:19:10.349 END TEST raid_state_function_test 00:19:10.349 ************************************ 00:19:10.349 10:33:03 -- bdev/bdev_raid.sh@289 -- # return 0 00:19:10.349 00:19:10.349 real 0m14.026s 00:19:10.349 user 0m25.283s 00:19:10.349 sys 0m1.510s 00:19:10.349 10:33:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:10.349 10:33:03 -- common/autotest_common.sh@10 -- # set +x 00:19:10.349 10:33:03 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:19:10.349 10:33:03 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:19:10.349 10:33:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:10.349 10:33:03 -- common/autotest_common.sh@10 -- # set +x 00:19:10.349 ************************************ 00:19:10.349 START TEST raid_state_function_test_sb 00:19:10.349 ************************************ 00:19:10.349 10:33:03 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 4 true 00:19:10.349 10:33:03 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:19:10.349 10:33:03 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:19:10.349 10:33:03 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:19:10.349 10:33:03 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:19:10.349 10:33:03 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:19:10.349 10:33:03 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:19:10.349 10:33:03 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:10.349 10:33:03 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:19:10.349 10:33:03 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:10.349 10:33:03 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:10.349 10:33:03 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:19:10.349 10:33:03 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:10.349 10:33:03 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:10.349 10:33:03 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:19:10.349 10:33:03 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:10.349 10:33:03 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:10.349 10:33:03 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:19:10.349 10:33:03 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:10.349 10:33:03 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:10.349 Process raid pid: 123285 00:19:10.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
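raid_state_function_test_sb reruns the same state machine with superblock=true, so bdev_raid_create is passed -s and the member bdevs later report data_offset 2048 / data_size 63488 instead of 0 / 65536, reflecting the space reserved for the on-disk superblock. The loop traced above is just the member-name expansion; condensed, with the count taken from the 'concat 4 true' arguments:

    # Expand BaseBdev1..BaseBdev4 into an array for a 4-member array
    num_base_bdevs=4
    base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done))
    echo "${base_bdevs[@]}"    # BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4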
00:19:10.349 10:33:03 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:19:10.349 10:33:03 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:19:10.349 10:33:03 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:19:10.349 10:33:03 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:19:10.349 10:33:03 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:19:10.349 10:33:03 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:19:10.349 10:33:03 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:19:10.349 10:33:03 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:19:10.349 10:33:03 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:19:10.349 10:33:03 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:19:10.349 10:33:03 -- bdev/bdev_raid.sh@226 -- # raid_pid=123285 00:19:10.349 10:33:03 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 123285' 00:19:10.349 10:33:03 -- bdev/bdev_raid.sh@228 -- # waitforlisten 123285 /var/tmp/spdk-raid.sock 00:19:10.349 10:33:03 -- common/autotest_common.sh@819 -- # '[' -z 123285 ']' 00:19:10.349 10:33:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:10.349 10:33:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:10.349 10:33:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:10.349 10:33:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:10.349 10:33:03 -- common/autotest_common.sh@10 -- # set +x 00:19:10.349 10:33:03 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:10.349 [2024-07-12 10:33:03.967867] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:19:10.349 [2024-07-12 10:33:03.968251] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:10.349 [2024-07-12 10:33:04.130727] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.607 [2024-07-12 10:33:04.288336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:10.607 [2024-07-12 10:33:04.456311] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:11.172 10:33:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:11.172 10:33:04 -- common/autotest_common.sh@852 -- # return 0 00:19:11.172 10:33:04 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:11.172 [2024-07-12 10:33:04.999503] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:11.172 [2024-07-12 10:33:04.999693] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:11.172 [2024-07-12 10:33:04.999819] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:11.172 [2024-07-12 10:33:04.999878] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:11.172 [2024-07-12 10:33:05.000105] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:11.172 [2024-07-12 10:33:05.000182] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:11.172 [2024-07-12 10:33:05.000211] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:11.172 [2024-07-12 10:33:05.000353] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:11.172 10:33:05 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:11.172 10:33:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:11.172 10:33:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:11.172 10:33:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:11.172 10:33:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:11.172 10:33:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:11.172 10:33:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:11.172 10:33:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:11.172 10:33:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:11.172 10:33:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:11.172 10:33:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:11.172 10:33:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:11.430 10:33:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:11.430 "name": "Existed_Raid", 00:19:11.430 "uuid": "1fcaba5a-4099-44d3-a8ec-d8b5ccff6421", 00:19:11.430 "strip_size_kb": 64, 00:19:11.430 "state": "configuring", 00:19:11.430 "raid_level": "concat", 00:19:11.430 "superblock": true, 00:19:11.430 "num_base_bdevs": 4, 00:19:11.430 "num_base_bdevs_discovered": 0, 00:19:11.430 "num_base_bdevs_operational": 4, 00:19:11.430 "base_bdevs_list": [ 00:19:11.430 { 
00:19:11.430 "name": "BaseBdev1", 00:19:11.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.430 "is_configured": false, 00:19:11.430 "data_offset": 0, 00:19:11.430 "data_size": 0 00:19:11.430 }, 00:19:11.430 { 00:19:11.430 "name": "BaseBdev2", 00:19:11.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.430 "is_configured": false, 00:19:11.430 "data_offset": 0, 00:19:11.430 "data_size": 0 00:19:11.430 }, 00:19:11.430 { 00:19:11.430 "name": "BaseBdev3", 00:19:11.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.430 "is_configured": false, 00:19:11.430 "data_offset": 0, 00:19:11.430 "data_size": 0 00:19:11.430 }, 00:19:11.430 { 00:19:11.430 "name": "BaseBdev4", 00:19:11.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.430 "is_configured": false, 00:19:11.430 "data_offset": 0, 00:19:11.430 "data_size": 0 00:19:11.430 } 00:19:11.430 ] 00:19:11.430 }' 00:19:11.430 10:33:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:11.430 10:33:05 -- common/autotest_common.sh@10 -- # set +x 00:19:11.995 10:33:05 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:12.252 [2024-07-12 10:33:06.071517] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:12.252 [2024-07-12 10:33:06.071669] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:19:12.252 10:33:06 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:12.510 [2024-07-12 10:33:06.247609] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:12.510 [2024-07-12 10:33:06.247775] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:12.510 [2024-07-12 10:33:06.247872] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:12.510 [2024-07-12 10:33:06.247936] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:12.510 [2024-07-12 10:33:06.248055] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:12.510 [2024-07-12 10:33:06.248127] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:12.510 [2024-07-12 10:33:06.248154] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:12.510 [2024-07-12 10:33:06.248296] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:12.510 10:33:06 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:12.768 [2024-07-12 10:33:06.444868] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:12.768 BaseBdev1 00:19:12.768 10:33:06 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:19:12.768 10:33:06 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:19:12.768 10:33:06 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:12.768 10:33:06 -- common/autotest_common.sh@889 -- # local i 00:19:12.768 10:33:06 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:12.768 10:33:06 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:12.768 10:33:06 -- common/autotest_common.sh@892 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:12.768 10:33:06 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:13.026 [ 00:19:13.026 { 00:19:13.026 "name": "BaseBdev1", 00:19:13.026 "aliases": [ 00:19:13.026 "9df3f93a-fad0-43b0-8399-a58008508496" 00:19:13.026 ], 00:19:13.026 "product_name": "Malloc disk", 00:19:13.026 "block_size": 512, 00:19:13.026 "num_blocks": 65536, 00:19:13.026 "uuid": "9df3f93a-fad0-43b0-8399-a58008508496", 00:19:13.026 "assigned_rate_limits": { 00:19:13.026 "rw_ios_per_sec": 0, 00:19:13.026 "rw_mbytes_per_sec": 0, 00:19:13.026 "r_mbytes_per_sec": 0, 00:19:13.026 "w_mbytes_per_sec": 0 00:19:13.026 }, 00:19:13.026 "claimed": true, 00:19:13.026 "claim_type": "exclusive_write", 00:19:13.026 "zoned": false, 00:19:13.026 "supported_io_types": { 00:19:13.026 "read": true, 00:19:13.026 "write": true, 00:19:13.026 "unmap": true, 00:19:13.026 "write_zeroes": true, 00:19:13.026 "flush": true, 00:19:13.026 "reset": true, 00:19:13.026 "compare": false, 00:19:13.026 "compare_and_write": false, 00:19:13.027 "abort": true, 00:19:13.027 "nvme_admin": false, 00:19:13.027 "nvme_io": false 00:19:13.027 }, 00:19:13.027 "memory_domains": [ 00:19:13.027 { 00:19:13.027 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:13.027 "dma_device_type": 2 00:19:13.027 } 00:19:13.027 ], 00:19:13.027 "driver_specific": {} 00:19:13.027 } 00:19:13.027 ] 00:19:13.027 10:33:06 -- common/autotest_common.sh@895 -- # return 0 00:19:13.027 10:33:06 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:13.027 10:33:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:13.027 10:33:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:13.027 10:33:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:13.027 10:33:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:13.027 10:33:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:13.027 10:33:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:13.027 10:33:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:13.027 10:33:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:13.027 10:33:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:13.027 10:33:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:13.027 10:33:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:13.285 10:33:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:13.285 "name": "Existed_Raid", 00:19:13.285 "uuid": "3180f9d4-b0e1-42bf-99ec-16bf6531252c", 00:19:13.285 "strip_size_kb": 64, 00:19:13.285 "state": "configuring", 00:19:13.285 "raid_level": "concat", 00:19:13.285 "superblock": true, 00:19:13.285 "num_base_bdevs": 4, 00:19:13.285 "num_base_bdevs_discovered": 1, 00:19:13.285 "num_base_bdevs_operational": 4, 00:19:13.285 "base_bdevs_list": [ 00:19:13.285 { 00:19:13.285 "name": "BaseBdev1", 00:19:13.285 "uuid": "9df3f93a-fad0-43b0-8399-a58008508496", 00:19:13.285 "is_configured": true, 00:19:13.285 "data_offset": 2048, 00:19:13.285 "data_size": 63488 00:19:13.285 }, 00:19:13.285 { 00:19:13.285 "name": "BaseBdev2", 00:19:13.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.285 "is_configured": false, 00:19:13.285 "data_offset": 0, 00:19:13.285 "data_size": 0 
00:19:13.285 }, 00:19:13.285 { 00:19:13.285 "name": "BaseBdev3", 00:19:13.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.285 "is_configured": false, 00:19:13.285 "data_offset": 0, 00:19:13.285 "data_size": 0 00:19:13.285 }, 00:19:13.285 { 00:19:13.285 "name": "BaseBdev4", 00:19:13.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.285 "is_configured": false, 00:19:13.285 "data_offset": 0, 00:19:13.285 "data_size": 0 00:19:13.285 } 00:19:13.285 ] 00:19:13.285 }' 00:19:13.285 10:33:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:13.285 10:33:07 -- common/autotest_common.sh@10 -- # set +x 00:19:14.221 10:33:07 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:14.221 [2024-07-12 10:33:08.017147] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:14.221 [2024-07-12 10:33:08.017315] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:19:14.221 10:33:08 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:19:14.221 10:33:08 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:14.479 10:33:08 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:14.737 BaseBdev1 00:19:14.737 10:33:08 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:19:14.737 10:33:08 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:19:14.737 10:33:08 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:14.737 10:33:08 -- common/autotest_common.sh@889 -- # local i 00:19:14.737 10:33:08 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:14.737 10:33:08 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:14.737 10:33:08 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:14.994 10:33:08 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:15.252 [ 00:19:15.252 { 00:19:15.252 "name": "BaseBdev1", 00:19:15.252 "aliases": [ 00:19:15.252 "fb40574b-f18c-4ca5-85e2-a1e1617ee944" 00:19:15.252 ], 00:19:15.252 "product_name": "Malloc disk", 00:19:15.252 "block_size": 512, 00:19:15.252 "num_blocks": 65536, 00:19:15.252 "uuid": "fb40574b-f18c-4ca5-85e2-a1e1617ee944", 00:19:15.252 "assigned_rate_limits": { 00:19:15.252 "rw_ios_per_sec": 0, 00:19:15.252 "rw_mbytes_per_sec": 0, 00:19:15.252 "r_mbytes_per_sec": 0, 00:19:15.252 "w_mbytes_per_sec": 0 00:19:15.252 }, 00:19:15.252 "claimed": false, 00:19:15.252 "zoned": false, 00:19:15.252 "supported_io_types": { 00:19:15.252 "read": true, 00:19:15.252 "write": true, 00:19:15.252 "unmap": true, 00:19:15.252 "write_zeroes": true, 00:19:15.252 "flush": true, 00:19:15.252 "reset": true, 00:19:15.252 "compare": false, 00:19:15.252 "compare_and_write": false, 00:19:15.252 "abort": true, 00:19:15.252 "nvme_admin": false, 00:19:15.252 "nvme_io": false 00:19:15.252 }, 00:19:15.252 "memory_domains": [ 00:19:15.252 { 00:19:15.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:15.252 "dma_device_type": 2 00:19:15.252 } 00:19:15.252 ], 00:19:15.252 "driver_specific": {} 00:19:15.252 } 00:19:15.252 ] 00:19:15.252 10:33:08 -- common/autotest_common.sh@895 -- # return 0 00:19:15.252 10:33:08 -- 
bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:15.514 [2024-07-12 10:33:09.181318] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:15.514 [2024-07-12 10:33:09.182972] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:15.514 [2024-07-12 10:33:09.183158] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:15.514 [2024-07-12 10:33:09.183315] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:15.514 [2024-07-12 10:33:09.183501] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:15.514 [2024-07-12 10:33:09.183598] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:15.514 [2024-07-12 10:33:09.183652] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:15.514 10:33:09 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:19:15.514 10:33:09 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:15.514 10:33:09 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:15.514 10:33:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:15.515 10:33:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:15.515 10:33:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:15.515 10:33:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:15.515 10:33:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:15.515 10:33:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:15.515 10:33:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:15.515 10:33:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:15.515 10:33:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:15.515 10:33:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:15.515 10:33:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:15.515 10:33:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:15.515 "name": "Existed_Raid", 00:19:15.515 "uuid": "a7abf37a-2214-440a-9201-c0279c94affa", 00:19:15.515 "strip_size_kb": 64, 00:19:15.515 "state": "configuring", 00:19:15.515 "raid_level": "concat", 00:19:15.515 "superblock": true, 00:19:15.515 "num_base_bdevs": 4, 00:19:15.515 "num_base_bdevs_discovered": 1, 00:19:15.515 "num_base_bdevs_operational": 4, 00:19:15.515 "base_bdevs_list": [ 00:19:15.515 { 00:19:15.515 "name": "BaseBdev1", 00:19:15.515 "uuid": "fb40574b-f18c-4ca5-85e2-a1e1617ee944", 00:19:15.515 "is_configured": true, 00:19:15.515 "data_offset": 2048, 00:19:15.515 "data_size": 63488 00:19:15.515 }, 00:19:15.515 { 00:19:15.515 "name": "BaseBdev2", 00:19:15.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.515 "is_configured": false, 00:19:15.515 "data_offset": 0, 00:19:15.515 "data_size": 0 00:19:15.515 }, 00:19:15.515 { 00:19:15.515 "name": "BaseBdev3", 00:19:15.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.515 "is_configured": false, 00:19:15.515 "data_offset": 0, 00:19:15.515 "data_size": 0 00:19:15.515 }, 00:19:15.515 { 00:19:15.515 "name": "BaseBdev4", 00:19:15.515 "uuid": "00000000-0000-0000-0000-000000000000", 
00:19:15.515 "is_configured": false, 00:19:15.515 "data_offset": 0, 00:19:15.515 "data_size": 0 00:19:15.515 } 00:19:15.515 ] 00:19:15.515 }' 00:19:15.515 10:33:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:15.515 10:33:09 -- common/autotest_common.sh@10 -- # set +x 00:19:16.517 10:33:10 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:16.517 [2024-07-12 10:33:10.264064] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:16.517 BaseBdev2 00:19:16.517 10:33:10 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:19:16.517 10:33:10 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:19:16.517 10:33:10 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:16.517 10:33:10 -- common/autotest_common.sh@889 -- # local i 00:19:16.517 10:33:10 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:16.517 10:33:10 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:16.517 10:33:10 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:16.775 10:33:10 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:17.033 [ 00:19:17.033 { 00:19:17.033 "name": "BaseBdev2", 00:19:17.033 "aliases": [ 00:19:17.033 "8c67e4a6-d4b2-4310-af6e-7c1fbbf43f7a" 00:19:17.033 ], 00:19:17.034 "product_name": "Malloc disk", 00:19:17.034 "block_size": 512, 00:19:17.034 "num_blocks": 65536, 00:19:17.034 "uuid": "8c67e4a6-d4b2-4310-af6e-7c1fbbf43f7a", 00:19:17.034 "assigned_rate_limits": { 00:19:17.034 "rw_ios_per_sec": 0, 00:19:17.034 "rw_mbytes_per_sec": 0, 00:19:17.034 "r_mbytes_per_sec": 0, 00:19:17.034 "w_mbytes_per_sec": 0 00:19:17.034 }, 00:19:17.034 "claimed": true, 00:19:17.034 "claim_type": "exclusive_write", 00:19:17.034 "zoned": false, 00:19:17.034 "supported_io_types": { 00:19:17.034 "read": true, 00:19:17.034 "write": true, 00:19:17.034 "unmap": true, 00:19:17.034 "write_zeroes": true, 00:19:17.034 "flush": true, 00:19:17.034 "reset": true, 00:19:17.034 "compare": false, 00:19:17.034 "compare_and_write": false, 00:19:17.034 "abort": true, 00:19:17.034 "nvme_admin": false, 00:19:17.034 "nvme_io": false 00:19:17.034 }, 00:19:17.034 "memory_domains": [ 00:19:17.034 { 00:19:17.034 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:17.034 "dma_device_type": 2 00:19:17.034 } 00:19:17.034 ], 00:19:17.034 "driver_specific": {} 00:19:17.034 } 00:19:17.034 ] 00:19:17.034 10:33:10 -- common/autotest_common.sh@895 -- # return 0 00:19:17.034 10:33:10 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:17.034 10:33:10 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:17.034 10:33:10 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:17.034 10:33:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:17.034 10:33:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:17.034 10:33:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:17.034 10:33:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:17.034 10:33:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:17.034 10:33:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:17.034 10:33:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:17.034 10:33:10 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:19:17.034 10:33:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:17.034 10:33:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:17.034 10:33:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:17.034 10:33:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:17.034 "name": "Existed_Raid", 00:19:17.034 "uuid": "a7abf37a-2214-440a-9201-c0279c94affa", 00:19:17.034 "strip_size_kb": 64, 00:19:17.034 "state": "configuring", 00:19:17.034 "raid_level": "concat", 00:19:17.034 "superblock": true, 00:19:17.034 "num_base_bdevs": 4, 00:19:17.034 "num_base_bdevs_discovered": 2, 00:19:17.034 "num_base_bdevs_operational": 4, 00:19:17.034 "base_bdevs_list": [ 00:19:17.034 { 00:19:17.034 "name": "BaseBdev1", 00:19:17.034 "uuid": "fb40574b-f18c-4ca5-85e2-a1e1617ee944", 00:19:17.034 "is_configured": true, 00:19:17.034 "data_offset": 2048, 00:19:17.034 "data_size": 63488 00:19:17.034 }, 00:19:17.034 { 00:19:17.034 "name": "BaseBdev2", 00:19:17.034 "uuid": "8c67e4a6-d4b2-4310-af6e-7c1fbbf43f7a", 00:19:17.034 "is_configured": true, 00:19:17.034 "data_offset": 2048, 00:19:17.034 "data_size": 63488 00:19:17.034 }, 00:19:17.034 { 00:19:17.034 "name": "BaseBdev3", 00:19:17.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.034 "is_configured": false, 00:19:17.034 "data_offset": 0, 00:19:17.034 "data_size": 0 00:19:17.034 }, 00:19:17.034 { 00:19:17.034 "name": "BaseBdev4", 00:19:17.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.034 "is_configured": false, 00:19:17.034 "data_offset": 0, 00:19:17.034 "data_size": 0 00:19:17.034 } 00:19:17.034 ] 00:19:17.034 }' 00:19:17.034 10:33:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:17.034 10:33:10 -- common/autotest_common.sh@10 -- # set +x 00:19:17.969 10:33:11 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:17.969 [2024-07-12 10:33:11.799728] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:17.969 BaseBdev3 00:19:17.969 10:33:11 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:19:17.969 10:33:11 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:19:17.969 10:33:11 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:17.969 10:33:11 -- common/autotest_common.sh@889 -- # local i 00:19:17.969 10:33:11 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:17.969 10:33:11 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:17.969 10:33:11 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:18.226 10:33:12 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:18.484 [ 00:19:18.484 { 00:19:18.484 "name": "BaseBdev3", 00:19:18.484 "aliases": [ 00:19:18.484 "ab88def6-8b8e-4d99-bd22-cf56909c6067" 00:19:18.484 ], 00:19:18.484 "product_name": "Malloc disk", 00:19:18.484 "block_size": 512, 00:19:18.484 "num_blocks": 65536, 00:19:18.484 "uuid": "ab88def6-8b8e-4d99-bd22-cf56909c6067", 00:19:18.484 "assigned_rate_limits": { 00:19:18.484 "rw_ios_per_sec": 0, 00:19:18.484 "rw_mbytes_per_sec": 0, 00:19:18.484 "r_mbytes_per_sec": 0, 00:19:18.484 "w_mbytes_per_sec": 0 00:19:18.484 }, 00:19:18.484 "claimed": true, 00:19:18.484 "claim_type": "exclusive_write", 
00:19:18.484 "zoned": false, 00:19:18.484 "supported_io_types": { 00:19:18.484 "read": true, 00:19:18.484 "write": true, 00:19:18.484 "unmap": true, 00:19:18.484 "write_zeroes": true, 00:19:18.484 "flush": true, 00:19:18.484 "reset": true, 00:19:18.484 "compare": false, 00:19:18.484 "compare_and_write": false, 00:19:18.484 "abort": true, 00:19:18.484 "nvme_admin": false, 00:19:18.484 "nvme_io": false 00:19:18.484 }, 00:19:18.484 "memory_domains": [ 00:19:18.484 { 00:19:18.484 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:18.484 "dma_device_type": 2 00:19:18.484 } 00:19:18.484 ], 00:19:18.484 "driver_specific": {} 00:19:18.484 } 00:19:18.484 ] 00:19:18.484 10:33:12 -- common/autotest_common.sh@895 -- # return 0 00:19:18.484 10:33:12 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:18.484 10:33:12 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:18.484 10:33:12 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:18.484 10:33:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:18.484 10:33:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:18.484 10:33:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:18.484 10:33:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:18.484 10:33:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:18.484 10:33:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:18.484 10:33:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:18.484 10:33:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:18.484 10:33:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:18.484 10:33:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:18.484 10:33:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:18.743 10:33:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:18.743 "name": "Existed_Raid", 00:19:18.743 "uuid": "a7abf37a-2214-440a-9201-c0279c94affa", 00:19:18.743 "strip_size_kb": 64, 00:19:18.743 "state": "configuring", 00:19:18.743 "raid_level": "concat", 00:19:18.743 "superblock": true, 00:19:18.743 "num_base_bdevs": 4, 00:19:18.743 "num_base_bdevs_discovered": 3, 00:19:18.743 "num_base_bdevs_operational": 4, 00:19:18.743 "base_bdevs_list": [ 00:19:18.743 { 00:19:18.743 "name": "BaseBdev1", 00:19:18.743 "uuid": "fb40574b-f18c-4ca5-85e2-a1e1617ee944", 00:19:18.743 "is_configured": true, 00:19:18.743 "data_offset": 2048, 00:19:18.743 "data_size": 63488 00:19:18.743 }, 00:19:18.743 { 00:19:18.743 "name": "BaseBdev2", 00:19:18.743 "uuid": "8c67e4a6-d4b2-4310-af6e-7c1fbbf43f7a", 00:19:18.743 "is_configured": true, 00:19:18.743 "data_offset": 2048, 00:19:18.743 "data_size": 63488 00:19:18.743 }, 00:19:18.743 { 00:19:18.743 "name": "BaseBdev3", 00:19:18.743 "uuid": "ab88def6-8b8e-4d99-bd22-cf56909c6067", 00:19:18.743 "is_configured": true, 00:19:18.743 "data_offset": 2048, 00:19:18.743 "data_size": 63488 00:19:18.743 }, 00:19:18.743 { 00:19:18.743 "name": "BaseBdev4", 00:19:18.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:18.743 "is_configured": false, 00:19:18.743 "data_offset": 0, 00:19:18.743 "data_size": 0 00:19:18.743 } 00:19:18.743 ] 00:19:18.743 }' 00:19:18.743 10:33:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:18.743 10:33:12 -- common/autotest_common.sh@10 -- # set +x 00:19:19.311 10:33:13 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:19:19.568 [2024-07-12 10:33:13.427471] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:19.568 [2024-07-12 10:33:13.427880] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:19:19.568 BaseBdev4 00:19:19.568 [2024-07-12 10:33:13.428027] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:19:19.568 [2024-07-12 10:33:13.428181] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:19:19.568 [2024-07-12 10:33:13.428575] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:19:19.568 [2024-07-12 10:33:13.428706] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:19:19.568 [2024-07-12 10:33:13.428976] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:19.568 10:33:13 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:19:19.568 10:33:13 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:19:19.568 10:33:13 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:19.568 10:33:13 -- common/autotest_common.sh@889 -- # local i 00:19:19.568 10:33:13 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:19.568 10:33:13 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:19.569 10:33:13 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:19.825 10:33:13 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:20.083 [ 00:19:20.083 { 00:19:20.083 "name": "BaseBdev4", 00:19:20.083 "aliases": [ 00:19:20.083 "b291136b-5f67-4698-a679-8cfe76bc5367" 00:19:20.083 ], 00:19:20.083 "product_name": "Malloc disk", 00:19:20.083 "block_size": 512, 00:19:20.083 "num_blocks": 65536, 00:19:20.083 "uuid": "b291136b-5f67-4698-a679-8cfe76bc5367", 00:19:20.083 "assigned_rate_limits": { 00:19:20.083 "rw_ios_per_sec": 0, 00:19:20.083 "rw_mbytes_per_sec": 0, 00:19:20.083 "r_mbytes_per_sec": 0, 00:19:20.083 "w_mbytes_per_sec": 0 00:19:20.083 }, 00:19:20.083 "claimed": true, 00:19:20.083 "claim_type": "exclusive_write", 00:19:20.083 "zoned": false, 00:19:20.083 "supported_io_types": { 00:19:20.083 "read": true, 00:19:20.083 "write": true, 00:19:20.083 "unmap": true, 00:19:20.083 "write_zeroes": true, 00:19:20.083 "flush": true, 00:19:20.083 "reset": true, 00:19:20.083 "compare": false, 00:19:20.083 "compare_and_write": false, 00:19:20.083 "abort": true, 00:19:20.083 "nvme_admin": false, 00:19:20.083 "nvme_io": false 00:19:20.083 }, 00:19:20.083 "memory_domains": [ 00:19:20.083 { 00:19:20.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:20.083 "dma_device_type": 2 00:19:20.083 } 00:19:20.083 ], 00:19:20.083 "driver_specific": {} 00:19:20.083 } 00:19:20.083 ] 00:19:20.083 10:33:13 -- common/autotest_common.sh@895 -- # return 0 00:19:20.083 10:33:13 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:20.083 10:33:13 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:20.083 10:33:13 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:19:20.083 10:33:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:20.084 10:33:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:20.084 10:33:13 -- bdev/bdev_raid.sh@119 -- 
# local raid_level=concat 00:19:20.084 10:33:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:20.084 10:33:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:20.084 10:33:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:20.084 10:33:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:20.084 10:33:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:20.084 10:33:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:20.084 10:33:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:20.084 10:33:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:20.341 10:33:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:20.341 "name": "Existed_Raid", 00:19:20.341 "uuid": "a7abf37a-2214-440a-9201-c0279c94affa", 00:19:20.341 "strip_size_kb": 64, 00:19:20.341 "state": "online", 00:19:20.341 "raid_level": "concat", 00:19:20.341 "superblock": true, 00:19:20.341 "num_base_bdevs": 4, 00:19:20.341 "num_base_bdevs_discovered": 4, 00:19:20.341 "num_base_bdevs_operational": 4, 00:19:20.341 "base_bdevs_list": [ 00:19:20.341 { 00:19:20.341 "name": "BaseBdev1", 00:19:20.341 "uuid": "fb40574b-f18c-4ca5-85e2-a1e1617ee944", 00:19:20.341 "is_configured": true, 00:19:20.341 "data_offset": 2048, 00:19:20.341 "data_size": 63488 00:19:20.341 }, 00:19:20.341 { 00:19:20.341 "name": "BaseBdev2", 00:19:20.341 "uuid": "8c67e4a6-d4b2-4310-af6e-7c1fbbf43f7a", 00:19:20.341 "is_configured": true, 00:19:20.341 "data_offset": 2048, 00:19:20.341 "data_size": 63488 00:19:20.341 }, 00:19:20.341 { 00:19:20.341 "name": "BaseBdev3", 00:19:20.341 "uuid": "ab88def6-8b8e-4d99-bd22-cf56909c6067", 00:19:20.341 "is_configured": true, 00:19:20.341 "data_offset": 2048, 00:19:20.341 "data_size": 63488 00:19:20.341 }, 00:19:20.341 { 00:19:20.341 "name": "BaseBdev4", 00:19:20.341 "uuid": "b291136b-5f67-4698-a679-8cfe76bc5367", 00:19:20.341 "is_configured": true, 00:19:20.341 "data_offset": 2048, 00:19:20.341 "data_size": 63488 00:19:20.341 } 00:19:20.341 ] 00:19:20.341 }' 00:19:20.341 10:33:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:20.341 10:33:14 -- common/autotest_common.sh@10 -- # set +x 00:19:20.905 10:33:14 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:21.164 [2024-07-12 10:33:14.871831] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:21.164 [2024-07-12 10:33:14.871963] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:21.164 [2024-07-12 10:33:14.872117] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:21.164 10:33:14 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:19:21.164 10:33:14 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:19:21.164 10:33:14 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:19:21.164 10:33:14 -- bdev/bdev_raid.sh@197 -- # return 1 00:19:21.164 10:33:14 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:19:21.164 10:33:14 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:19:21.164 10:33:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:21.164 10:33:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:19:21.164 10:33:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:21.164 10:33:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:21.164 
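Because concat has no redundancy, has_redundancy returns 1 above and the expected post-removal state is downgraded to offline: losing BaseBdev1 must take the array from online straight to offline rather than to a degraded-but-online state. A sketch of that decision under stated assumptions (only the 'case $1 in' dispatch and the concat branch returning 1 are visible in the trace; the set of levels treated as redundant is assumed):

    has_redundancy() {
        case $1 in
            raid1) return 0 ;;   # assumed: levels that survive member loss
            *) return 1 ;;       # concat/raid0-style striping does not
        esac
    }

    expected_state=online
    has_redundancy concat || expected_state=offline
    echo "$expected_state"   # offline, matching the verify_raid_bdev_state call above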
10:33:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:21.164 10:33:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:21.164 10:33:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:21.164 10:33:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:21.164 10:33:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:21.164 10:33:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:21.164 10:33:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:21.422 10:33:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:21.422 "name": "Existed_Raid", 00:19:21.422 "uuid": "a7abf37a-2214-440a-9201-c0279c94affa", 00:19:21.422 "strip_size_kb": 64, 00:19:21.422 "state": "offline", 00:19:21.422 "raid_level": "concat", 00:19:21.422 "superblock": true, 00:19:21.422 "num_base_bdevs": 4, 00:19:21.422 "num_base_bdevs_discovered": 3, 00:19:21.422 "num_base_bdevs_operational": 3, 00:19:21.422 "base_bdevs_list": [ 00:19:21.422 { 00:19:21.422 "name": null, 00:19:21.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:21.422 "is_configured": false, 00:19:21.422 "data_offset": 2048, 00:19:21.422 "data_size": 63488 00:19:21.422 }, 00:19:21.422 { 00:19:21.422 "name": "BaseBdev2", 00:19:21.422 "uuid": "8c67e4a6-d4b2-4310-af6e-7c1fbbf43f7a", 00:19:21.422 "is_configured": true, 00:19:21.422 "data_offset": 2048, 00:19:21.422 "data_size": 63488 00:19:21.422 }, 00:19:21.422 { 00:19:21.422 "name": "BaseBdev3", 00:19:21.422 "uuid": "ab88def6-8b8e-4d99-bd22-cf56909c6067", 00:19:21.422 "is_configured": true, 00:19:21.422 "data_offset": 2048, 00:19:21.422 "data_size": 63488 00:19:21.422 }, 00:19:21.422 { 00:19:21.422 "name": "BaseBdev4", 00:19:21.422 "uuid": "b291136b-5f67-4698-a679-8cfe76bc5367", 00:19:21.422 "is_configured": true, 00:19:21.422 "data_offset": 2048, 00:19:21.422 "data_size": 63488 00:19:21.422 } 00:19:21.422 ] 00:19:21.422 }' 00:19:21.422 10:33:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:21.422 10:33:15 -- common/autotest_common.sh@10 -- # set +x 00:19:21.989 10:33:15 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:19:21.989 10:33:15 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:21.989 10:33:15 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:21.989 10:33:15 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:22.246 10:33:16 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:22.246 10:33:16 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:22.246 10:33:16 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:22.503 [2024-07-12 10:33:16.228421] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:22.503 10:33:16 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:22.503 10:33:16 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:22.504 10:33:16 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:22.504 10:33:16 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:22.761 10:33:16 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:22.761 10:33:16 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:22.761 10:33:16 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:23.018 [2024-07-12 10:33:16.715741] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:23.018 10:33:16 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:23.018 10:33:16 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:23.018 10:33:16 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:23.018 10:33:16 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:23.276 10:33:17 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:23.276 10:33:17 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:23.276 10:33:17 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:19:23.533 [2024-07-12 10:33:17.227179] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:19:23.534 [2024-07-12 10:33:17.227409] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:19:23.534 10:33:17 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:23.534 10:33:17 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:23.534 10:33:17 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:19:23.534 10:33:17 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:23.792 10:33:17 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:19:23.792 10:33:17 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:19:23.792 10:33:17 -- bdev/bdev_raid.sh@287 -- # killprocess 123285 00:19:23.792 10:33:17 -- common/autotest_common.sh@926 -- # '[' -z 123285 ']' 00:19:23.792 10:33:17 -- common/autotest_common.sh@930 -- # kill -0 123285 00:19:23.792 10:33:17 -- common/autotest_common.sh@931 -- # uname 00:19:23.792 10:33:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:23.792 10:33:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 123285 00:19:23.792 10:33:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:23.792 killing process with pid 123285 00:19:23.792 10:33:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:23.792 10:33:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 123285' 00:19:23.792 10:33:17 -- common/autotest_common.sh@945 -- # kill 123285 00:19:23.792 10:33:17 -- common/autotest_common.sh@950 -- # wait 123285 00:19:23.792 [2024-07-12 10:33:17.521379] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:23.792 [2024-07-12 10:33:17.521478] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:24.725 ************************************ 00:19:24.725 END TEST raid_state_function_test_sb 00:19:24.725 ************************************ 00:19:24.725 10:33:18 -- bdev/bdev_raid.sh@289 -- # return 0 00:19:24.725 00:19:24.725 real 0m14.539s 00:19:24.725 user 0m26.275s 00:19:24.725 sys 0m1.528s 00:19:24.725 10:33:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:24.725 10:33:18 -- common/autotest_common.sh@10 -- # set +x 00:19:24.725 10:33:18 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:19:24.725 10:33:18 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:19:24.725 10:33:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:24.725 10:33:18 -- common/autotest_common.sh@10 -- # set +x 00:19:24.725 
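The test that finishes here (raid_state_function_test_sb) closes with a hot-remove sweep: before each removal it confirms that bdev_raid_get_bdevs still lists Existed_Raid, then pulls one base bdev out with bdev_malloc_delete; after the last deletion the raid bdev is cleaned up and the final query comes back empty. A condensed sketch of that loop, assuming the same rpc.py socket the suite uses (an approximation of the bash under test, not the verbatim script):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  for i in 2 3 4; do
      # the raid bdev must still be enumerable before each removal
      name=$($rpc bdev_raid_get_bdevs all | jq -r '.[0]["name"]')
      [ "$name" = Existed_Raid ] || exit 1
      $rpc bdev_malloc_delete "BaseBdev$i"
  done
  # with every base bdev gone, the raid bdev should have been cleaned up
  [ -z "$($rpc bdev_raid_get_bdevs all | jq -r '.[0]["name"] | select(.)')" ]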
************************************ 00:19:24.725 START TEST raid_superblock_test 00:19:24.725 ************************************ 00:19:24.725 10:33:18 -- common/autotest_common.sh@1104 -- # raid_superblock_test concat 4 00:19:24.725 10:33:18 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:19:24.725 10:33:18 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:19:24.725 10:33:18 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:19:24.725 10:33:18 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:19:24.725 10:33:18 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:19:24.725 10:33:18 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:19:24.725 10:33:18 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:19:24.725 10:33:18 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:19:24.725 10:33:18 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:19:24.725 10:33:18 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:19:24.725 10:33:18 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:19:24.725 10:33:18 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:19:24.725 10:33:18 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:19:24.725 10:33:18 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:19:24.725 10:33:18 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:19:24.725 10:33:18 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:19:24.725 10:33:18 -- bdev/bdev_raid.sh@357 -- # raid_pid=123760 00:19:24.726 10:33:18 -- bdev/bdev_raid.sh@358 -- # waitforlisten 123760 /var/tmp/spdk-raid.sock 00:19:24.726 10:33:18 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:19:24.726 10:33:18 -- common/autotest_common.sh@819 -- # '[' -z 123760 ']' 00:19:24.726 10:33:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:24.726 10:33:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:24.726 10:33:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:24.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:24.726 10:33:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:24.726 10:33:18 -- common/autotest_common.sh@10 -- # set +x 00:19:24.726 [2024-07-12 10:33:18.575183] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
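Each test in this suite runs against a dedicated SPDK app: run_test launches bdev_svc with its JSON-RPC server bound to /var/tmp/spdk-raid.sock and raid debug logging enabled, and waitforlisten blocks until that socket answers before any bdev RPCs are issued. A minimal sketch of that startup, assuming a built SPDK tree at the path shown in the trace (the polling loop stands in for the waitforlisten helper from autotest_common.sh):

  SPDK=/home/vagrant/spdk_repo/spdk
  sock=/var/tmp/spdk-raid.sock
  "$SPDK/test/app/bdev_svc/bdev_svc" -r "$sock" -L bdev_raid &
  raid_pid=$!
  # wait until the app accepts RPCs on the UNIX domain socket
  until "$SPDK/scripts/rpc.py" -s "$sock" rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done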
00:19:24.726 [2024-07-12 10:33:18.575669] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123760 ] 00:19:24.984 [2024-07-12 10:33:18.746654] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.243 [2024-07-12 10:33:18.968314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:25.243 [2024-07-12 10:33:19.133682] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:25.810 10:33:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:25.810 10:33:19 -- common/autotest_common.sh@852 -- # return 0 00:19:25.810 10:33:19 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:19:25.810 10:33:19 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:25.810 10:33:19 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:19:25.810 10:33:19 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:19:25.810 10:33:19 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:25.810 10:33:19 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:25.810 10:33:19 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:25.810 10:33:19 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:25.810 10:33:19 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:19:25.810 malloc1 00:19:25.810 10:33:19 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:26.070 [2024-07-12 10:33:19.914321] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:26.070 [2024-07-12 10:33:19.914532] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:26.070 [2024-07-12 10:33:19.914597] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:19:26.070 [2024-07-12 10:33:19.914779] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:26.070 [2024-07-12 10:33:19.916953] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:26.070 [2024-07-12 10:33:19.917113] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:26.070 pt1 00:19:26.070 10:33:19 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:26.070 10:33:19 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:26.070 10:33:19 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:19:26.070 10:33:19 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:19:26.070 10:33:19 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:26.070 10:33:19 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:26.070 10:33:19 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:26.070 10:33:19 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:26.070 10:33:19 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:19:26.328 malloc2 00:19:26.328 10:33:20 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:19:26.587 [2024-07-12 10:33:20.317065] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:26.587 [2024-07-12 10:33:20.317276] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:26.587 [2024-07-12 10:33:20.317360] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:19:26.587 [2024-07-12 10:33:20.317513] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:26.587 [2024-07-12 10:33:20.319782] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:26.587 [2024-07-12 10:33:20.319961] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:26.587 pt2 00:19:26.587 10:33:20 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:26.587 10:33:20 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:26.587 10:33:20 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:19:26.587 10:33:20 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:19:26.587 10:33:20 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:19:26.587 10:33:20 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:26.587 10:33:20 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:26.587 10:33:20 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:26.587 10:33:20 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:19:26.846 malloc3 00:19:26.846 10:33:20 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:26.846 [2024-07-12 10:33:20.693884] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:26.846 [2024-07-12 10:33:20.694078] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:26.846 [2024-07-12 10:33:20.694150] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:19:26.846 [2024-07-12 10:33:20.694288] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:26.846 [2024-07-12 10:33:20.696538] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:26.846 [2024-07-12 10:33:20.696712] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:26.846 pt3 00:19:26.846 10:33:20 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:26.846 10:33:20 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:26.846 10:33:20 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:19:26.846 10:33:20 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:19:26.846 10:33:20 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:19:26.846 10:33:20 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:26.846 10:33:20 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:26.846 10:33:20 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:26.846 10:33:20 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:19:27.105 malloc4 00:19:27.105 10:33:20 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 
00000000-0000-0000-0000-000000000004 00:19:27.363 [2024-07-12 10:33:21.135250] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:27.363 [2024-07-12 10:33:21.135493] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:27.363 [2024-07-12 10:33:21.135567] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:27.363 [2024-07-12 10:33:21.135839] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:27.363 [2024-07-12 10:33:21.137815] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:27.363 [2024-07-12 10:33:21.137995] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:27.363 pt4 00:19:27.363 10:33:21 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:27.363 10:33:21 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:27.363 10:33:21 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:19:27.622 [2024-07-12 10:33:21.323383] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:27.622 [2024-07-12 10:33:21.325365] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:27.622 [2024-07-12 10:33:21.325554] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:27.622 [2024-07-12 10:33:21.325744] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:27.622 [2024-07-12 10:33:21.326042] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:19:27.622 [2024-07-12 10:33:21.326178] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:19:27.622 [2024-07-12 10:33:21.326330] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:19:27.622 [2024-07-12 10:33:21.326728] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:19:27.622 [2024-07-12 10:33:21.326841] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:19:27.622 [2024-07-12 10:33:21.327077] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:27.622 10:33:21 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:19:27.622 10:33:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:27.622 10:33:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:27.622 10:33:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:27.622 10:33:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:27.622 10:33:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:27.622 10:33:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:27.622 10:33:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:27.622 10:33:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:27.622 10:33:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:27.622 10:33:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:27.622 10:33:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:27.881 10:33:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:27.881 "name": "raid_bdev1", 00:19:27.881 "uuid": 
"55a20086-6560-4e8b-8557-4b8feb8c3dd2", 00:19:27.881 "strip_size_kb": 64, 00:19:27.881 "state": "online", 00:19:27.881 "raid_level": "concat", 00:19:27.881 "superblock": true, 00:19:27.881 "num_base_bdevs": 4, 00:19:27.881 "num_base_bdevs_discovered": 4, 00:19:27.881 "num_base_bdevs_operational": 4, 00:19:27.881 "base_bdevs_list": [ 00:19:27.881 { 00:19:27.881 "name": "pt1", 00:19:27.881 "uuid": "63a227ef-8a17-5819-b54b-42509b67f22e", 00:19:27.881 "is_configured": true, 00:19:27.881 "data_offset": 2048, 00:19:27.881 "data_size": 63488 00:19:27.881 }, 00:19:27.881 { 00:19:27.881 "name": "pt2", 00:19:27.881 "uuid": "15c9ae97-eb13-55f3-bffc-12a0898a4d87", 00:19:27.881 "is_configured": true, 00:19:27.881 "data_offset": 2048, 00:19:27.881 "data_size": 63488 00:19:27.881 }, 00:19:27.881 { 00:19:27.881 "name": "pt3", 00:19:27.881 "uuid": "da5c4504-5a43-5900-abb3-e33535fedf05", 00:19:27.881 "is_configured": true, 00:19:27.881 "data_offset": 2048, 00:19:27.881 "data_size": 63488 00:19:27.881 }, 00:19:27.881 { 00:19:27.881 "name": "pt4", 00:19:27.881 "uuid": "2ead8449-33d7-5d92-8343-5d9e8a05b4a2", 00:19:27.881 "is_configured": true, 00:19:27.881 "data_offset": 2048, 00:19:27.881 "data_size": 63488 00:19:27.881 } 00:19:27.881 ] 00:19:27.881 }' 00:19:27.881 10:33:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:27.881 10:33:21 -- common/autotest_common.sh@10 -- # set +x 00:19:28.448 10:33:22 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:28.448 10:33:22 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:19:28.706 [2024-07-12 10:33:22.411882] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:28.706 10:33:22 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=55a20086-6560-4e8b-8557-4b8feb8c3dd2 00:19:28.706 10:33:22 -- bdev/bdev_raid.sh@380 -- # '[' -z 55a20086-6560-4e8b-8557-4b8feb8c3dd2 ']' 00:19:28.706 10:33:22 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:28.706 [2024-07-12 10:33:22.599712] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:28.706 [2024-07-12 10:33:22.599843] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:28.706 [2024-07-12 10:33:22.599994] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:28.706 [2024-07-12 10:33:22.600162] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:28.706 [2024-07-12 10:33:22.600284] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:19:28.706 10:33:22 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:28.706 10:33:22 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:19:28.965 10:33:22 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:19:28.965 10:33:22 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:19:28.965 10:33:22 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:28.965 10:33:22 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:19:29.224 10:33:23 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:29.224 10:33:23 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt2 00:19:29.482 10:33:23 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:29.482 10:33:23 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:19:29.741 10:33:23 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:29.741 10:33:23 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:19:29.741 10:33:23 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:19:29.741 10:33:23 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:29.999 10:33:23 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:19:29.999 10:33:23 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:29.999 10:33:23 -- common/autotest_common.sh@640 -- # local es=0 00:19:29.999 10:33:23 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:29.999 10:33:23 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:29.999 10:33:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:29.999 10:33:23 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:29.999 10:33:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:29.999 10:33:23 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:29.999 10:33:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:29.999 10:33:23 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:29.999 10:33:23 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:19:29.999 10:33:23 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:30.257 [2024-07-12 10:33:23.979931] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:30.257 [2024-07-12 10:33:23.981844] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:30.257 [2024-07-12 10:33:23.982012] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:19:30.257 [2024-07-12 10:33:23.982093] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:19:30.257 [2024-07-12 10:33:23.982169] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:19:30.257 [2024-07-12 10:33:23.982326] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:19:30.257 [2024-07-12 10:33:23.982451] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:19:30.257 [2024-07-12 10:33:23.982627] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:19:30.257 [2024-07-12 10:33:23.982686] 
bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:30.257 [2024-07-12 10:33:23.982775] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state configuring 00:19:30.257 request: 00:19:30.257 { 00:19:30.257 "name": "raid_bdev1", 00:19:30.257 "raid_level": "concat", 00:19:30.257 "base_bdevs": [ 00:19:30.257 "malloc1", 00:19:30.257 "malloc2", 00:19:30.257 "malloc3", 00:19:30.257 "malloc4" 00:19:30.257 ], 00:19:30.257 "superblock": false, 00:19:30.257 "strip_size_kb": 64, 00:19:30.257 "method": "bdev_raid_create", 00:19:30.257 "req_id": 1 00:19:30.257 } 00:19:30.257 Got JSON-RPC error response 00:19:30.257 response: 00:19:30.257 { 00:19:30.257 "code": -17, 00:19:30.257 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:30.257 } 00:19:30.257 10:33:23 -- common/autotest_common.sh@643 -- # es=1 00:19:30.257 10:33:23 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:30.257 10:33:23 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:30.257 10:33:23 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:30.257 10:33:23 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:30.257 10:33:23 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:19:30.515 10:33:24 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:19:30.515 10:33:24 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:19:30.515 10:33:24 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:30.515 [2024-07-12 10:33:24.415953] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:30.515 [2024-07-12 10:33:24.416142] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:30.515 [2024-07-12 10:33:24.416205] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:30.515 [2024-07-12 10:33:24.416317] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:30.515 [2024-07-12 10:33:24.418215] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:30.515 [2024-07-12 10:33:24.418405] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:30.515 [2024-07-12 10:33:24.418587] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:19:30.515 [2024-07-12 10:33:24.418756] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:30.515 pt1 00:19:30.515 10:33:24 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:19:30.515 10:33:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:30.515 10:33:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:30.515 10:33:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:30.515 10:33:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:30.515 10:33:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:30.516 10:33:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:30.516 10:33:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:30.516 10:33:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:30.516 10:33:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:30.516 10:33:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:30.516 10:33:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:30.775 10:33:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:30.775 "name": "raid_bdev1", 00:19:30.775 "uuid": "55a20086-6560-4e8b-8557-4b8feb8c3dd2", 00:19:30.775 "strip_size_kb": 64, 00:19:30.775 "state": "configuring", 00:19:30.775 "raid_level": "concat", 00:19:30.775 "superblock": true, 00:19:30.775 "num_base_bdevs": 4, 00:19:30.775 "num_base_bdevs_discovered": 1, 00:19:30.775 "num_base_bdevs_operational": 4, 00:19:30.775 "base_bdevs_list": [ 00:19:30.775 { 00:19:30.775 "name": "pt1", 00:19:30.775 "uuid": "63a227ef-8a17-5819-b54b-42509b67f22e", 00:19:30.775 "is_configured": true, 00:19:30.775 "data_offset": 2048, 00:19:30.775 "data_size": 63488 00:19:30.775 }, 00:19:30.775 { 00:19:30.775 "name": null, 00:19:30.775 "uuid": "15c9ae97-eb13-55f3-bffc-12a0898a4d87", 00:19:30.775 "is_configured": false, 00:19:30.775 "data_offset": 2048, 00:19:30.775 "data_size": 63488 00:19:30.775 }, 00:19:30.775 { 00:19:30.775 "name": null, 00:19:30.775 "uuid": "da5c4504-5a43-5900-abb3-e33535fedf05", 00:19:30.775 "is_configured": false, 00:19:30.775 "data_offset": 2048, 00:19:30.775 "data_size": 63488 00:19:30.775 }, 00:19:30.775 { 00:19:30.775 "name": null, 00:19:30.775 "uuid": "2ead8449-33d7-5d92-8343-5d9e8a05b4a2", 00:19:30.775 "is_configured": false, 00:19:30.775 "data_offset": 2048, 00:19:30.775 "data_size": 63488 00:19:30.775 } 00:19:30.775 ] 00:19:30.775 }' 00:19:30.775 10:33:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:30.775 10:33:24 -- common/autotest_common.sh@10 -- # set +x 00:19:31.710 10:33:25 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:19:31.710 10:33:25 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:31.710 [2024-07-12 10:33:25.480176] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:31.710 [2024-07-12 10:33:25.480385] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:31.710 [2024-07-12 10:33:25.480465] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:19:31.710 [2024-07-12 10:33:25.480583] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:31.710 [2024-07-12 10:33:25.481168] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:31.710 [2024-07-12 10:33:25.481238] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:31.710 [2024-07-12 10:33:25.481474] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:19:31.710 [2024-07-12 10:33:25.481532] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:31.710 pt2 00:19:31.710 10:33:25 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:31.969 [2024-07-12 10:33:25.676201] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:19:31.970 10:33:25 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:19:31.970 10:33:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:31.970 10:33:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:31.970 10:33:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:31.970 10:33:25 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:31.970 10:33:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:31.970 10:33:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:31.970 10:33:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:31.970 10:33:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:31.970 10:33:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:31.970 10:33:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:31.970 10:33:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:32.229 10:33:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:32.229 "name": "raid_bdev1", 00:19:32.229 "uuid": "55a20086-6560-4e8b-8557-4b8feb8c3dd2", 00:19:32.229 "strip_size_kb": 64, 00:19:32.229 "state": "configuring", 00:19:32.229 "raid_level": "concat", 00:19:32.229 "superblock": true, 00:19:32.229 "num_base_bdevs": 4, 00:19:32.229 "num_base_bdevs_discovered": 1, 00:19:32.229 "num_base_bdevs_operational": 4, 00:19:32.229 "base_bdevs_list": [ 00:19:32.229 { 00:19:32.229 "name": "pt1", 00:19:32.229 "uuid": "63a227ef-8a17-5819-b54b-42509b67f22e", 00:19:32.229 "is_configured": true, 00:19:32.229 "data_offset": 2048, 00:19:32.229 "data_size": 63488 00:19:32.229 }, 00:19:32.229 { 00:19:32.229 "name": null, 00:19:32.229 "uuid": "15c9ae97-eb13-55f3-bffc-12a0898a4d87", 00:19:32.229 "is_configured": false, 00:19:32.229 "data_offset": 2048, 00:19:32.229 "data_size": 63488 00:19:32.229 }, 00:19:32.229 { 00:19:32.229 "name": null, 00:19:32.229 "uuid": "da5c4504-5a43-5900-abb3-e33535fedf05", 00:19:32.229 "is_configured": false, 00:19:32.229 "data_offset": 2048, 00:19:32.229 "data_size": 63488 00:19:32.229 }, 00:19:32.229 { 00:19:32.229 "name": null, 00:19:32.229 "uuid": "2ead8449-33d7-5d92-8343-5d9e8a05b4a2", 00:19:32.229 "is_configured": false, 00:19:32.229 "data_offset": 2048, 00:19:32.229 "data_size": 63488 00:19:32.229 } 00:19:32.229 ] 00:19:32.229 }' 00:19:32.229 10:33:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:32.229 10:33:25 -- common/autotest_common.sh@10 -- # set +x 00:19:32.797 10:33:26 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:19:32.797 10:33:26 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:32.797 10:33:26 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:33.055 [2024-07-12 10:33:26.772382] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:33.055 [2024-07-12 10:33:26.772558] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:33.055 [2024-07-12 10:33:26.772623] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:19:33.055 [2024-07-12 10:33:26.772739] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:33.055 [2024-07-12 10:33:26.773146] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:33.055 [2024-07-12 10:33:26.773303] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:33.055 [2024-07-12 10:33:26.773413] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:19:33.055 [2024-07-12 10:33:26.773461] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:33.055 pt2 00:19:33.055 10:33:26 -- 
bdev/bdev_raid.sh@422 -- # (( i++ )) 00:19:33.055 10:33:26 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:33.055 10:33:26 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:33.313 [2024-07-12 10:33:27.004419] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:33.313 [2024-07-12 10:33:27.004593] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:33.313 [2024-07-12 10:33:27.004650] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:19:33.313 [2024-07-12 10:33:27.004756] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:33.313 [2024-07-12 10:33:27.005238] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:33.313 [2024-07-12 10:33:27.005396] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:33.313 [2024-07-12 10:33:27.005562] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:19:33.313 [2024-07-12 10:33:27.005671] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:33.313 pt3 00:19:33.313 10:33:27 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:19:33.313 10:33:27 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:33.313 10:33:27 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:33.313 [2024-07-12 10:33:27.188455] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:33.313 [2024-07-12 10:33:27.188628] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:33.313 [2024-07-12 10:33:27.188693] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:19:33.313 [2024-07-12 10:33:27.188848] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:33.313 [2024-07-12 10:33:27.189252] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:33.313 [2024-07-12 10:33:27.189430] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:33.313 [2024-07-12 10:33:27.189541] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:19:33.313 [2024-07-12 10:33:27.189665] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:33.313 [2024-07-12 10:33:27.189879] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:19:33.313 [2024-07-12 10:33:27.189973] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:19:33.313 [2024-07-12 10:33:27.190106] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:33.313 [2024-07-12 10:33:27.190455] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:19:33.313 [2024-07-12 10:33:27.190547] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:19:33.313 [2024-07-12 10:33:27.190761] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:33.313 pt4 00:19:33.313 10:33:27 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:19:33.313 10:33:27 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs 
)) 00:19:33.313 10:33:27 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:19:33.313 10:33:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:33.313 10:33:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:33.313 10:33:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:33.313 10:33:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:33.313 10:33:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:33.313 10:33:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:33.313 10:33:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:33.313 10:33:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:33.313 10:33:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:33.313 10:33:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:33.313 10:33:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:33.571 10:33:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:33.571 "name": "raid_bdev1", 00:19:33.571 "uuid": "55a20086-6560-4e8b-8557-4b8feb8c3dd2", 00:19:33.571 "strip_size_kb": 64, 00:19:33.571 "state": "online", 00:19:33.571 "raid_level": "concat", 00:19:33.571 "superblock": true, 00:19:33.571 "num_base_bdevs": 4, 00:19:33.571 "num_base_bdevs_discovered": 4, 00:19:33.571 "num_base_bdevs_operational": 4, 00:19:33.571 "base_bdevs_list": [ 00:19:33.571 { 00:19:33.571 "name": "pt1", 00:19:33.571 "uuid": "63a227ef-8a17-5819-b54b-42509b67f22e", 00:19:33.571 "is_configured": true, 00:19:33.571 "data_offset": 2048, 00:19:33.571 "data_size": 63488 00:19:33.571 }, 00:19:33.571 { 00:19:33.571 "name": "pt2", 00:19:33.571 "uuid": "15c9ae97-eb13-55f3-bffc-12a0898a4d87", 00:19:33.571 "is_configured": true, 00:19:33.571 "data_offset": 2048, 00:19:33.571 "data_size": 63488 00:19:33.571 }, 00:19:33.571 { 00:19:33.571 "name": "pt3", 00:19:33.571 "uuid": "da5c4504-5a43-5900-abb3-e33535fedf05", 00:19:33.571 "is_configured": true, 00:19:33.571 "data_offset": 2048, 00:19:33.571 "data_size": 63488 00:19:33.571 }, 00:19:33.571 { 00:19:33.571 "name": "pt4", 00:19:33.571 "uuid": "2ead8449-33d7-5d92-8343-5d9e8a05b4a2", 00:19:33.571 "is_configured": true, 00:19:33.571 "data_offset": 2048, 00:19:33.571 "data_size": 63488 00:19:33.571 } 00:19:33.571 ] 00:19:33.571 }' 00:19:33.571 10:33:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:33.571 10:33:27 -- common/autotest_common.sh@10 -- # set +x 00:19:34.503 10:33:28 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:34.503 10:33:28 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:19:34.503 [2024-07-12 10:33:28.368816] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:34.503 10:33:28 -- bdev/bdev_raid.sh@430 -- # '[' 55a20086-6560-4e8b-8557-4b8feb8c3dd2 '!=' 55a20086-6560-4e8b-8557-4b8feb8c3dd2 ']' 00:19:34.503 10:33:28 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:19:34.503 10:33:28 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:19:34.503 10:33:28 -- bdev/bdev_raid.sh@197 -- # return 1 00:19:34.503 10:33:28 -- bdev/bdev_raid.sh@511 -- # killprocess 123760 00:19:34.503 10:33:28 -- common/autotest_common.sh@926 -- # '[' -z 123760 ']' 00:19:34.503 10:33:28 -- common/autotest_common.sh@930 -- # kill -0 123760 00:19:34.503 10:33:28 -- common/autotest_common.sh@931 -- # uname 00:19:34.503 10:33:28 -- 
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:34.503 10:33:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 123760 00:19:34.503 killing process with pid 123760 00:19:34.503 10:33:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:34.503 10:33:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:34.503 10:33:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 123760' 00:19:34.503 10:33:28 -- common/autotest_common.sh@945 -- # kill 123760 00:19:34.503 10:33:28 -- common/autotest_common.sh@950 -- # wait 123760 00:19:34.503 [2024-07-12 10:33:28.412251] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:34.503 [2024-07-12 10:33:28.412301] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:34.503 [2024-07-12 10:33:28.412353] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:34.503 [2024-07-12 10:33:28.412362] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:19:35.070 [2024-07-12 10:33:28.678839] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:36.006 ************************************ 00:19:36.006 END TEST raid_superblock_test 00:19:36.006 ************************************ 00:19:36.006 10:33:29 -- bdev/bdev_raid.sh@513 -- # return 0 00:19:36.006 00:19:36.006 real 0m11.184s 00:19:36.006 user 0m19.603s 00:19:36.006 sys 0m1.200s 00:19:36.006 10:33:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:36.006 10:33:29 -- common/autotest_common.sh@10 -- # set +x 00:19:36.006 10:33:29 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:19:36.006 10:33:29 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:19:36.006 10:33:29 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:19:36.006 10:33:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:36.006 10:33:29 -- common/autotest_common.sh@10 -- # set +x 00:19:36.006 ************************************ 00:19:36.006 START TEST raid_state_function_test 00:19:36.006 ************************************ 00:19:36.006 10:33:29 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 4 false 00:19:36.006 10:33:29 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:19:36.006 10:33:29 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:19:36.006 10:33:29 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:19:36.006 10:33:29 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:19:36.006 10:33:29 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:19:36.006 10:33:29 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:19:36.006 10:33:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:36.006 10:33:29 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:19:36.006 10:33:29 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:36.006 10:33:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:36.006 10:33:29 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:19:36.006 10:33:29 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:36.006 10:33:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:36.006 10:33:29 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:19:36.006 10:33:29 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:36.006 10:33:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 
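The raid_superblock_test body traced above builds four malloc bdevs, wraps each in a passthru bdev with a fixed UUID, assembles them into a concat raid created with -s so a superblock is written, tears the raid and passthru layers down, and then asserts that re-creating the raid directly on the malloc bdevs fails with -17 (File exists) because each malloc still carries that superblock. A condensed sketch of the round trip, reusing the RPCs from the trace (the malloc*/pt* names mirror the test, not a public fixture):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  for i in 1 2 3 4; do
      $rpc bdev_malloc_create 32 512 -b "malloc$i"
      $rpc bdev_passthru_create -b "malloc$i" -p "pt$i" \
          -u "00000000-0000-0000-0000-00000000000$i"
  done
  $rpc bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
  $rpc bdev_raid_delete raid_bdev1
  for i in 1 2 3 4; do $rpc bdev_passthru_delete "pt$i"; done
  # the malloc bdevs still hold the raid superblock, so this must fail (File exists)
  $rpc bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' \
      -n raid_bdev1 && exit 1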
00:19:36.006 10:33:29 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:19:36.006 10:33:29 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:36.006 10:33:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:36.006 10:33:29 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:19:36.006 Process raid pid: 124096 00:19:36.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:36.006 10:33:29 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:19:36.006 10:33:29 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:19:36.006 10:33:29 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:19:36.006 10:33:29 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:19:36.006 10:33:29 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:19:36.006 10:33:29 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:19:36.006 10:33:29 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:19:36.006 10:33:29 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:19:36.006 10:33:29 -- bdev/bdev_raid.sh@226 -- # raid_pid=124096 00:19:36.006 10:33:29 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 124096' 00:19:36.006 10:33:29 -- bdev/bdev_raid.sh@228 -- # waitforlisten 124096 /var/tmp/spdk-raid.sock 00:19:36.006 10:33:29 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:36.006 10:33:29 -- common/autotest_common.sh@819 -- # '[' -z 124096 ']' 00:19:36.006 10:33:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:36.006 10:33:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:36.006 10:33:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:36.006 10:33:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:36.006 10:33:29 -- common/autotest_common.sh@10 -- # set +x 00:19:36.006 [2024-07-12 10:33:29.799713] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
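raid_state_function_test drives the same RPC server from the other direction: Existed_Raid is created over base bdevs that do not exist yet, so the raid bdev sits in the "configuring" state, and verify_raid_bdev_state repeatedly re-reads bdev_raid_get_bdevs to compare name, state, raid_level, strip_size and the base bdev counts against expectations. A rough sketch of that assertion, built from the jq filters shown in the trace (check_state is a hypothetical stand-in for the fuller verify_raid_bdev_state helper):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  check_state() {   # usage: check_state <raid_bdev_name> <expected_state>
      local info
      info=$($rpc bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$1\")")
      [ "$(jq -r .state <<<"$info")" = "$2" ]
  }
  check_state Existed_Raid configuring   # no base bdevs discovered yet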
00:19:36.006 [2024-07-12 10:33:29.799868] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:36.265 [2024-07-12 10:33:29.953752] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.265 [2024-07-12 10:33:30.134783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:36.524 [2024-07-12 10:33:30.323257] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:37.090 10:33:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:37.090 10:33:30 -- common/autotest_common.sh@852 -- # return 0 00:19:37.090 10:33:30 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:37.090 [2024-07-12 10:33:30.994187] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:37.090 [2024-07-12 10:33:30.994275] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:37.090 [2024-07-12 10:33:30.994287] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:37.090 [2024-07-12 10:33:30.994309] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:37.090 [2024-07-12 10:33:30.994315] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:37.090 [2024-07-12 10:33:30.994351] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:37.090 [2024-07-12 10:33:30.994359] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:37.090 [2024-07-12 10:33:30.994380] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:37.347 10:33:31 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:37.347 10:33:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:37.347 10:33:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:37.347 10:33:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:37.347 10:33:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:37.347 10:33:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:37.347 10:33:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:37.347 10:33:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:37.347 10:33:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:37.347 10:33:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:37.347 10:33:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:37.347 10:33:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:37.347 10:33:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:37.347 "name": "Existed_Raid", 00:19:37.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:37.347 "strip_size_kb": 0, 00:19:37.347 "state": "configuring", 00:19:37.347 "raid_level": "raid1", 00:19:37.347 "superblock": false, 00:19:37.347 "num_base_bdevs": 4, 00:19:37.347 "num_base_bdevs_discovered": 0, 00:19:37.347 "num_base_bdevs_operational": 4, 00:19:37.347 "base_bdevs_list": [ 00:19:37.347 { 00:19:37.347 "name": 
"BaseBdev1", 00:19:37.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:37.347 "is_configured": false, 00:19:37.347 "data_offset": 0, 00:19:37.347 "data_size": 0 00:19:37.347 }, 00:19:37.347 { 00:19:37.347 "name": "BaseBdev2", 00:19:37.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:37.347 "is_configured": false, 00:19:37.347 "data_offset": 0, 00:19:37.347 "data_size": 0 00:19:37.347 }, 00:19:37.347 { 00:19:37.347 "name": "BaseBdev3", 00:19:37.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:37.347 "is_configured": false, 00:19:37.347 "data_offset": 0, 00:19:37.347 "data_size": 0 00:19:37.347 }, 00:19:37.347 { 00:19:37.347 "name": "BaseBdev4", 00:19:37.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:37.347 "is_configured": false, 00:19:37.347 "data_offset": 0, 00:19:37.347 "data_size": 0 00:19:37.347 } 00:19:37.347 ] 00:19:37.347 }' 00:19:37.347 10:33:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:37.347 10:33:31 -- common/autotest_common.sh@10 -- # set +x 00:19:37.912 10:33:31 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:38.171 [2024-07-12 10:33:31.998243] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:38.171 [2024-07-12 10:33:31.998278] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:19:38.171 10:33:32 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:38.428 [2024-07-12 10:33:32.266283] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:38.428 [2024-07-12 10:33:32.266330] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:38.428 [2024-07-12 10:33:32.266340] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:38.428 [2024-07-12 10:33:32.266369] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:38.428 [2024-07-12 10:33:32.266376] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:38.428 [2024-07-12 10:33:32.266407] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:38.428 [2024-07-12 10:33:32.266414] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:38.428 [2024-07-12 10:33:32.266434] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:38.428 10:33:32 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:38.686 [2024-07-12 10:33:32.531598] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:38.686 BaseBdev1 00:19:38.686 10:33:32 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:19:38.686 10:33:32 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:19:38.686 10:33:32 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:38.686 10:33:32 -- common/autotest_common.sh@889 -- # local i 00:19:38.686 10:33:32 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:38.686 10:33:32 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:38.686 10:33:32 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:38.943 10:33:32 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:39.200 [ 00:19:39.200 { 00:19:39.200 "name": "BaseBdev1", 00:19:39.200 "aliases": [ 00:19:39.200 "c65bb12d-fc16-496f-8cfc-899fca127154" 00:19:39.200 ], 00:19:39.200 "product_name": "Malloc disk", 00:19:39.200 "block_size": 512, 00:19:39.200 "num_blocks": 65536, 00:19:39.200 "uuid": "c65bb12d-fc16-496f-8cfc-899fca127154", 00:19:39.200 "assigned_rate_limits": { 00:19:39.200 "rw_ios_per_sec": 0, 00:19:39.201 "rw_mbytes_per_sec": 0, 00:19:39.201 "r_mbytes_per_sec": 0, 00:19:39.201 "w_mbytes_per_sec": 0 00:19:39.201 }, 00:19:39.201 "claimed": true, 00:19:39.201 "claim_type": "exclusive_write", 00:19:39.201 "zoned": false, 00:19:39.201 "supported_io_types": { 00:19:39.201 "read": true, 00:19:39.201 "write": true, 00:19:39.201 "unmap": true, 00:19:39.201 "write_zeroes": true, 00:19:39.201 "flush": true, 00:19:39.201 "reset": true, 00:19:39.201 "compare": false, 00:19:39.201 "compare_and_write": false, 00:19:39.201 "abort": true, 00:19:39.201 "nvme_admin": false, 00:19:39.201 "nvme_io": false 00:19:39.201 }, 00:19:39.201 "memory_domains": [ 00:19:39.201 { 00:19:39.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:39.201 "dma_device_type": 2 00:19:39.201 } 00:19:39.201 ], 00:19:39.201 "driver_specific": {} 00:19:39.201 } 00:19:39.201 ] 00:19:39.201 10:33:32 -- common/autotest_common.sh@895 -- # return 0 00:19:39.201 10:33:32 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:39.201 10:33:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:39.201 10:33:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:39.201 10:33:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:39.201 10:33:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:39.201 10:33:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:39.201 10:33:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:39.201 10:33:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:39.201 10:33:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:39.201 10:33:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:39.201 10:33:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:39.201 10:33:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:39.201 10:33:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:39.201 "name": "Existed_Raid", 00:19:39.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:39.201 "strip_size_kb": 0, 00:19:39.201 "state": "configuring", 00:19:39.201 "raid_level": "raid1", 00:19:39.201 "superblock": false, 00:19:39.201 "num_base_bdevs": 4, 00:19:39.201 "num_base_bdevs_discovered": 1, 00:19:39.201 "num_base_bdevs_operational": 4, 00:19:39.201 "base_bdevs_list": [ 00:19:39.201 { 00:19:39.201 "name": "BaseBdev1", 00:19:39.201 "uuid": "c65bb12d-fc16-496f-8cfc-899fca127154", 00:19:39.201 "is_configured": true, 00:19:39.201 "data_offset": 0, 00:19:39.201 "data_size": 65536 00:19:39.201 }, 00:19:39.201 { 00:19:39.201 "name": "BaseBdev2", 00:19:39.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:39.201 "is_configured": false, 00:19:39.201 "data_offset": 0, 00:19:39.201 "data_size": 0 00:19:39.201 }, 
00:19:39.201 { 00:19:39.201 "name": "BaseBdev3", 00:19:39.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:39.201 "is_configured": false, 00:19:39.201 "data_offset": 0, 00:19:39.201 "data_size": 0 00:19:39.201 }, 00:19:39.201 { 00:19:39.201 "name": "BaseBdev4", 00:19:39.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:39.201 "is_configured": false, 00:19:39.201 "data_offset": 0, 00:19:39.201 "data_size": 0 00:19:39.201 } 00:19:39.201 ] 00:19:39.201 }' 00:19:39.201 10:33:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:39.201 10:33:33 -- common/autotest_common.sh@10 -- # set +x 00:19:40.135 10:33:33 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:40.135 [2024-07-12 10:33:34.015916] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:40.135 [2024-07-12 10:33:34.015951] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:19:40.135 10:33:34 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:19:40.135 10:33:34 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:40.394 [2024-07-12 10:33:34.247997] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:40.394 [2024-07-12 10:33:34.249865] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:40.394 [2024-07-12 10:33:34.249943] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:40.394 [2024-07-12 10:33:34.249955] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:40.394 [2024-07-12 10:33:34.249979] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:40.394 [2024-07-12 10:33:34.249987] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:40.394 [2024-07-12 10:33:34.250003] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:40.394 10:33:34 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:19:40.394 10:33:34 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:40.394 10:33:34 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:40.394 10:33:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:40.394 10:33:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:40.394 10:33:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:40.394 10:33:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:40.394 10:33:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:40.394 10:33:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:40.394 10:33:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:40.394 10:33:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:40.394 10:33:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:40.394 10:33:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:40.394 10:33:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:40.651 10:33:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:40.651 "name": "Existed_Raid", 00:19:40.651 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:19:40.651 "strip_size_kb": 0, 00:19:40.651 "state": "configuring", 00:19:40.651 "raid_level": "raid1", 00:19:40.651 "superblock": false, 00:19:40.651 "num_base_bdevs": 4, 00:19:40.651 "num_base_bdevs_discovered": 1, 00:19:40.651 "num_base_bdevs_operational": 4, 00:19:40.651 "base_bdevs_list": [ 00:19:40.651 { 00:19:40.651 "name": "BaseBdev1", 00:19:40.651 "uuid": "c65bb12d-fc16-496f-8cfc-899fca127154", 00:19:40.651 "is_configured": true, 00:19:40.651 "data_offset": 0, 00:19:40.651 "data_size": 65536 00:19:40.651 }, 00:19:40.651 { 00:19:40.651 "name": "BaseBdev2", 00:19:40.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:40.651 "is_configured": false, 00:19:40.651 "data_offset": 0, 00:19:40.651 "data_size": 0 00:19:40.651 }, 00:19:40.651 { 00:19:40.651 "name": "BaseBdev3", 00:19:40.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:40.651 "is_configured": false, 00:19:40.651 "data_offset": 0, 00:19:40.651 "data_size": 0 00:19:40.651 }, 00:19:40.651 { 00:19:40.651 "name": "BaseBdev4", 00:19:40.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:40.651 "is_configured": false, 00:19:40.651 "data_offset": 0, 00:19:40.651 "data_size": 0 00:19:40.651 } 00:19:40.651 ] 00:19:40.651 }' 00:19:40.651 10:33:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:40.651 10:33:34 -- common/autotest_common.sh@10 -- # set +x 00:19:41.216 10:33:35 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:41.780 [2024-07-12 10:33:35.390036] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:41.780 BaseBdev2 00:19:41.780 10:33:35 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:19:41.780 10:33:35 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:19:41.780 10:33:35 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:41.780 10:33:35 -- common/autotest_common.sh@889 -- # local i 00:19:41.780 10:33:35 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:41.780 10:33:35 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:41.780 10:33:35 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:41.780 10:33:35 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:42.037 [ 00:19:42.037 { 00:19:42.037 "name": "BaseBdev2", 00:19:42.037 "aliases": [ 00:19:42.037 "a2dd54e2-faa1-4119-b2c2-abbd98292246" 00:19:42.037 ], 00:19:42.037 "product_name": "Malloc disk", 00:19:42.037 "block_size": 512, 00:19:42.037 "num_blocks": 65536, 00:19:42.037 "uuid": "a2dd54e2-faa1-4119-b2c2-abbd98292246", 00:19:42.037 "assigned_rate_limits": { 00:19:42.037 "rw_ios_per_sec": 0, 00:19:42.037 "rw_mbytes_per_sec": 0, 00:19:42.037 "r_mbytes_per_sec": 0, 00:19:42.037 "w_mbytes_per_sec": 0 00:19:42.037 }, 00:19:42.037 "claimed": true, 00:19:42.037 "claim_type": "exclusive_write", 00:19:42.037 "zoned": false, 00:19:42.037 "supported_io_types": { 00:19:42.037 "read": true, 00:19:42.037 "write": true, 00:19:42.037 "unmap": true, 00:19:42.037 "write_zeroes": true, 00:19:42.037 "flush": true, 00:19:42.037 "reset": true, 00:19:42.037 "compare": false, 00:19:42.037 "compare_and_write": false, 00:19:42.037 "abort": true, 00:19:42.037 "nvme_admin": false, 00:19:42.037 "nvme_io": false 00:19:42.037 }, 00:19:42.037 "memory_domains": [ 00:19:42.037 { 
00:19:42.037 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:42.037 "dma_device_type": 2 00:19:42.037 } 00:19:42.037 ], 00:19:42.038 "driver_specific": {} 00:19:42.038 } 00:19:42.038 ] 00:19:42.038 10:33:35 -- common/autotest_common.sh@895 -- # return 0 00:19:42.038 10:33:35 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:42.038 10:33:35 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:42.038 10:33:35 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:42.038 10:33:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:42.038 10:33:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:42.038 10:33:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:42.038 10:33:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:42.038 10:33:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:42.038 10:33:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:42.038 10:33:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:42.038 10:33:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:42.038 10:33:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:42.038 10:33:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:42.038 10:33:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:42.296 10:33:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:42.296 "name": "Existed_Raid", 00:19:42.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:42.296 "strip_size_kb": 0, 00:19:42.296 "state": "configuring", 00:19:42.296 "raid_level": "raid1", 00:19:42.296 "superblock": false, 00:19:42.296 "num_base_bdevs": 4, 00:19:42.296 "num_base_bdevs_discovered": 2, 00:19:42.296 "num_base_bdevs_operational": 4, 00:19:42.296 "base_bdevs_list": [ 00:19:42.296 { 00:19:42.296 "name": "BaseBdev1", 00:19:42.296 "uuid": "c65bb12d-fc16-496f-8cfc-899fca127154", 00:19:42.296 "is_configured": true, 00:19:42.296 "data_offset": 0, 00:19:42.296 "data_size": 65536 00:19:42.296 }, 00:19:42.296 { 00:19:42.296 "name": "BaseBdev2", 00:19:42.296 "uuid": "a2dd54e2-faa1-4119-b2c2-abbd98292246", 00:19:42.296 "is_configured": true, 00:19:42.296 "data_offset": 0, 00:19:42.296 "data_size": 65536 00:19:42.296 }, 00:19:42.296 { 00:19:42.296 "name": "BaseBdev3", 00:19:42.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:42.296 "is_configured": false, 00:19:42.296 "data_offset": 0, 00:19:42.296 "data_size": 0 00:19:42.296 }, 00:19:42.296 { 00:19:42.296 "name": "BaseBdev4", 00:19:42.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:42.296 "is_configured": false, 00:19:42.296 "data_offset": 0, 00:19:42.296 "data_size": 0 00:19:42.296 } 00:19:42.296 ] 00:19:42.296 }' 00:19:42.296 10:33:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:42.296 10:33:36 -- common/autotest_common.sh@10 -- # set +x 00:19:42.862 10:33:36 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:43.120 [2024-07-12 10:33:36.965660] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:43.120 BaseBdev3 00:19:43.120 10:33:36 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:19:43.120 10:33:36 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:19:43.120 10:33:36 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:43.120 10:33:36 -- 
common/autotest_common.sh@889 -- # local i 00:19:43.120 10:33:36 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:43.120 10:33:36 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:43.120 10:33:36 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:43.378 10:33:37 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:43.636 [ 00:19:43.636 { 00:19:43.636 "name": "BaseBdev3", 00:19:43.636 "aliases": [ 00:19:43.636 "d5c82ea3-fab5-429c-8131-72528144c271" 00:19:43.636 ], 00:19:43.636 "product_name": "Malloc disk", 00:19:43.636 "block_size": 512, 00:19:43.636 "num_blocks": 65536, 00:19:43.636 "uuid": "d5c82ea3-fab5-429c-8131-72528144c271", 00:19:43.636 "assigned_rate_limits": { 00:19:43.636 "rw_ios_per_sec": 0, 00:19:43.636 "rw_mbytes_per_sec": 0, 00:19:43.636 "r_mbytes_per_sec": 0, 00:19:43.636 "w_mbytes_per_sec": 0 00:19:43.636 }, 00:19:43.636 "claimed": true, 00:19:43.636 "claim_type": "exclusive_write", 00:19:43.636 "zoned": false, 00:19:43.636 "supported_io_types": { 00:19:43.636 "read": true, 00:19:43.636 "write": true, 00:19:43.636 "unmap": true, 00:19:43.636 "write_zeroes": true, 00:19:43.636 "flush": true, 00:19:43.636 "reset": true, 00:19:43.636 "compare": false, 00:19:43.636 "compare_and_write": false, 00:19:43.636 "abort": true, 00:19:43.636 "nvme_admin": false, 00:19:43.637 "nvme_io": false 00:19:43.637 }, 00:19:43.637 "memory_domains": [ 00:19:43.637 { 00:19:43.637 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:43.637 "dma_device_type": 2 00:19:43.637 } 00:19:43.637 ], 00:19:43.637 "driver_specific": {} 00:19:43.637 } 00:19:43.637 ] 00:19:43.637 10:33:37 -- common/autotest_common.sh@895 -- # return 0 00:19:43.637 10:33:37 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:43.637 10:33:37 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:43.637 10:33:37 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:43.637 10:33:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:43.637 10:33:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:43.637 10:33:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:43.637 10:33:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:43.637 10:33:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:43.637 10:33:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:43.637 10:33:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:43.637 10:33:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:43.637 10:33:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:43.637 10:33:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:43.637 10:33:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:43.637 10:33:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:43.637 "name": "Existed_Raid", 00:19:43.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:43.637 "strip_size_kb": 0, 00:19:43.637 "state": "configuring", 00:19:43.637 "raid_level": "raid1", 00:19:43.637 "superblock": false, 00:19:43.637 "num_base_bdevs": 4, 00:19:43.637 "num_base_bdevs_discovered": 3, 00:19:43.637 "num_base_bdevs_operational": 4, 00:19:43.637 "base_bdevs_list": [ 00:19:43.637 { 00:19:43.637 "name": "BaseBdev1", 
00:19:43.637 "uuid": "c65bb12d-fc16-496f-8cfc-899fca127154", 00:19:43.637 "is_configured": true, 00:19:43.637 "data_offset": 0, 00:19:43.637 "data_size": 65536 00:19:43.637 }, 00:19:43.637 { 00:19:43.637 "name": "BaseBdev2", 00:19:43.637 "uuid": "a2dd54e2-faa1-4119-b2c2-abbd98292246", 00:19:43.637 "is_configured": true, 00:19:43.637 "data_offset": 0, 00:19:43.637 "data_size": 65536 00:19:43.637 }, 00:19:43.637 { 00:19:43.637 "name": "BaseBdev3", 00:19:43.637 "uuid": "d5c82ea3-fab5-429c-8131-72528144c271", 00:19:43.637 "is_configured": true, 00:19:43.637 "data_offset": 0, 00:19:43.637 "data_size": 65536 00:19:43.637 }, 00:19:43.637 { 00:19:43.637 "name": "BaseBdev4", 00:19:43.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:43.637 "is_configured": false, 00:19:43.637 "data_offset": 0, 00:19:43.637 "data_size": 0 00:19:43.637 } 00:19:43.637 ] 00:19:43.637 }' 00:19:43.637 10:33:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:43.637 10:33:37 -- common/autotest_common.sh@10 -- # set +x 00:19:44.572 10:33:38 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:19:44.572 [2024-07-12 10:33:38.377324] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:44.572 [2024-07-12 10:33:38.377376] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:19:44.572 [2024-07-12 10:33:38.377385] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:19:44.572 [2024-07-12 10:33:38.377516] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:19:44.573 [2024-07-12 10:33:38.377857] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:19:44.573 [2024-07-12 10:33:38.377879] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:19:44.573 [2024-07-12 10:33:38.378119] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:44.573 BaseBdev4 00:19:44.573 10:33:38 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:19:44.573 10:33:38 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:19:44.573 10:33:38 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:44.573 10:33:38 -- common/autotest_common.sh@889 -- # local i 00:19:44.573 10:33:38 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:44.573 10:33:38 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:44.573 10:33:38 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:44.831 10:33:38 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:44.831 [ 00:19:44.831 { 00:19:44.831 "name": "BaseBdev4", 00:19:44.831 "aliases": [ 00:19:44.831 "5672b267-b9c3-4f39-8bb8-84a4ac9e63af" 00:19:44.831 ], 00:19:44.831 "product_name": "Malloc disk", 00:19:44.831 "block_size": 512, 00:19:44.831 "num_blocks": 65536, 00:19:44.831 "uuid": "5672b267-b9c3-4f39-8bb8-84a4ac9e63af", 00:19:44.831 "assigned_rate_limits": { 00:19:44.831 "rw_ios_per_sec": 0, 00:19:44.831 "rw_mbytes_per_sec": 0, 00:19:44.831 "r_mbytes_per_sec": 0, 00:19:44.831 "w_mbytes_per_sec": 0 00:19:44.831 }, 00:19:44.831 "claimed": true, 00:19:44.831 "claim_type": "exclusive_write", 00:19:44.831 "zoned": false, 00:19:44.831 "supported_io_types": { 
00:19:44.831 "read": true, 00:19:44.831 "write": true, 00:19:44.831 "unmap": true, 00:19:44.831 "write_zeroes": true, 00:19:44.831 "flush": true, 00:19:44.831 "reset": true, 00:19:44.831 "compare": false, 00:19:44.831 "compare_and_write": false, 00:19:44.831 "abort": true, 00:19:44.831 "nvme_admin": false, 00:19:44.831 "nvme_io": false 00:19:44.831 }, 00:19:44.831 "memory_domains": [ 00:19:44.831 { 00:19:44.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:44.831 "dma_device_type": 2 00:19:44.831 } 00:19:44.831 ], 00:19:44.831 "driver_specific": {} 00:19:44.831 } 00:19:44.831 ] 00:19:45.115 10:33:38 -- common/autotest_common.sh@895 -- # return 0 00:19:45.115 10:33:38 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:45.115 10:33:38 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:45.115 10:33:38 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:19:45.115 10:33:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:45.115 10:33:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:45.115 10:33:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:45.115 10:33:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:45.115 10:33:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:45.115 10:33:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:45.115 10:33:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:45.115 10:33:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:45.115 10:33:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:45.115 10:33:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:45.115 10:33:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:45.115 10:33:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:45.115 "name": "Existed_Raid", 00:19:45.115 "uuid": "e546bf54-3ac6-470b-bdef-1f17098231ce", 00:19:45.115 "strip_size_kb": 0, 00:19:45.115 "state": "online", 00:19:45.115 "raid_level": "raid1", 00:19:45.115 "superblock": false, 00:19:45.115 "num_base_bdevs": 4, 00:19:45.115 "num_base_bdevs_discovered": 4, 00:19:45.115 "num_base_bdevs_operational": 4, 00:19:45.115 "base_bdevs_list": [ 00:19:45.115 { 00:19:45.115 "name": "BaseBdev1", 00:19:45.115 "uuid": "c65bb12d-fc16-496f-8cfc-899fca127154", 00:19:45.115 "is_configured": true, 00:19:45.115 "data_offset": 0, 00:19:45.115 "data_size": 65536 00:19:45.115 }, 00:19:45.115 { 00:19:45.115 "name": "BaseBdev2", 00:19:45.115 "uuid": "a2dd54e2-faa1-4119-b2c2-abbd98292246", 00:19:45.115 "is_configured": true, 00:19:45.115 "data_offset": 0, 00:19:45.115 "data_size": 65536 00:19:45.115 }, 00:19:45.115 { 00:19:45.115 "name": "BaseBdev3", 00:19:45.115 "uuid": "d5c82ea3-fab5-429c-8131-72528144c271", 00:19:45.115 "is_configured": true, 00:19:45.115 "data_offset": 0, 00:19:45.115 "data_size": 65536 00:19:45.115 }, 00:19:45.115 { 00:19:45.115 "name": "BaseBdev4", 00:19:45.115 "uuid": "5672b267-b9c3-4f39-8bb8-84a4ac9e63af", 00:19:45.115 "is_configured": true, 00:19:45.115 "data_offset": 0, 00:19:45.115 "data_size": 65536 00:19:45.115 } 00:19:45.115 ] 00:19:45.115 }' 00:19:45.115 10:33:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:45.115 10:33:38 -- common/autotest_common.sh@10 -- # set +x 00:19:45.704 10:33:39 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:45.962 [2024-07-12 10:33:39.817638] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:46.221 10:33:39 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:19:46.221 10:33:39 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:19:46.221 10:33:39 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:19:46.221 10:33:39 -- bdev/bdev_raid.sh@196 -- # return 0 00:19:46.221 10:33:39 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:19:46.221 10:33:39 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:19:46.221 10:33:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:46.221 10:33:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:46.221 10:33:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:46.221 10:33:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:46.221 10:33:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:46.221 10:33:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:46.221 10:33:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:46.221 10:33:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:46.221 10:33:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:46.221 10:33:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:46.221 10:33:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:46.480 10:33:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:46.480 "name": "Existed_Raid", 00:19:46.480 "uuid": "e546bf54-3ac6-470b-bdef-1f17098231ce", 00:19:46.480 "strip_size_kb": 0, 00:19:46.480 "state": "online", 00:19:46.480 "raid_level": "raid1", 00:19:46.480 "superblock": false, 00:19:46.480 "num_base_bdevs": 4, 00:19:46.480 "num_base_bdevs_discovered": 3, 00:19:46.480 "num_base_bdevs_operational": 3, 00:19:46.480 "base_bdevs_list": [ 00:19:46.480 { 00:19:46.480 "name": null, 00:19:46.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:46.480 "is_configured": false, 00:19:46.480 "data_offset": 0, 00:19:46.480 "data_size": 65536 00:19:46.480 }, 00:19:46.480 { 00:19:46.480 "name": "BaseBdev2", 00:19:46.480 "uuid": "a2dd54e2-faa1-4119-b2c2-abbd98292246", 00:19:46.480 "is_configured": true, 00:19:46.480 "data_offset": 0, 00:19:46.480 "data_size": 65536 00:19:46.480 }, 00:19:46.480 { 00:19:46.480 "name": "BaseBdev3", 00:19:46.480 "uuid": "d5c82ea3-fab5-429c-8131-72528144c271", 00:19:46.480 "is_configured": true, 00:19:46.480 "data_offset": 0, 00:19:46.480 "data_size": 65536 00:19:46.480 }, 00:19:46.480 { 00:19:46.480 "name": "BaseBdev4", 00:19:46.480 "uuid": "5672b267-b9c3-4f39-8bb8-84a4ac9e63af", 00:19:46.480 "is_configured": true, 00:19:46.480 "data_offset": 0, 00:19:46.480 "data_size": 65536 00:19:46.480 } 00:19:46.480 ] 00:19:46.480 }' 00:19:46.480 10:33:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:46.480 10:33:40 -- common/autotest_common.sh@10 -- # set +x 00:19:47.046 10:33:40 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:19:47.046 10:33:40 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:47.046 10:33:40 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:47.046 10:33:40 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:47.304 10:33:40 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:47.304 10:33:40 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:47.304 10:33:40 -- bdev/bdev_raid.sh@279 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:47.563 [2024-07-12 10:33:41.233503] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:47.563 10:33:41 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:47.563 10:33:41 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:47.563 10:33:41 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:47.563 10:33:41 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:47.821 10:33:41 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:47.821 10:33:41 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:47.821 10:33:41 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:48.080 [2024-07-12 10:33:41.800167] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:48.080 10:33:41 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:48.080 10:33:41 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:48.080 10:33:41 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:48.080 10:33:41 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:48.339 10:33:42 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:48.339 10:33:42 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:48.339 10:33:42 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:19:48.597 [2024-07-12 10:33:42.266295] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:19:48.597 [2024-07-12 10:33:42.266334] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:48.597 [2024-07-12 10:33:42.266394] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:48.597 [2024-07-12 10:33:42.332654] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:48.597 [2024-07-12 10:33:42.332689] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:19:48.597 10:33:42 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:48.597 10:33:42 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:48.597 10:33:42 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:48.597 10:33:42 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:19:48.856 10:33:42 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:19:48.857 10:33:42 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:19:48.857 10:33:42 -- bdev/bdev_raid.sh@287 -- # killprocess 124096 00:19:48.857 10:33:42 -- common/autotest_common.sh@926 -- # '[' -z 124096 ']' 00:19:48.857 10:33:42 -- common/autotest_common.sh@930 -- # kill -0 124096 00:19:48.857 10:33:42 -- common/autotest_common.sh@931 -- # uname 00:19:48.857 10:33:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:48.857 10:33:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 124096 00:19:48.857 killing process with pid 124096 00:19:48.857 10:33:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:48.857 10:33:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:48.857 10:33:42 -- 
common/autotest_common.sh@944 -- # echo 'killing process with pid 124096' 00:19:48.857 10:33:42 -- common/autotest_common.sh@945 -- # kill 124096 00:19:48.857 10:33:42 -- common/autotest_common.sh@950 -- # wait 124096 00:19:48.857 [2024-07-12 10:33:42.551687] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:48.857 [2024-07-12 10:33:42.551795] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:49.793 ************************************ 00:19:49.793 END TEST raid_state_function_test 00:19:49.793 ************************************ 00:19:49.793 10:33:43 -- bdev/bdev_raid.sh@289 -- # return 0 00:19:49.793 00:19:49.793 real 0m13.824s 00:19:49.793 user 0m24.812s 00:19:49.793 sys 0m1.571s 00:19:49.793 10:33:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:49.793 10:33:43 -- common/autotest_common.sh@10 -- # set +x 00:19:49.793 10:33:43 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:19:49.793 10:33:43 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:19:49.793 10:33:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:49.793 10:33:43 -- common/autotest_common.sh@10 -- # set +x 00:19:49.793 ************************************ 00:19:49.793 START TEST raid_state_function_test_sb 00:19:49.793 ************************************ 00:19:49.793 10:33:43 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 4 true 00:19:49.793 10:33:43 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:19:49.793 10:33:43 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:19:49.793 10:33:43 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:19:49.793 10:33:43 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:19:49.793 10:33:43 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:19:49.793 10:33:43 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:19:49.793 10:33:43 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:49.793 10:33:43 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:19:49.793 10:33:43 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:49.793 10:33:43 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:49.793 10:33:43 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:19:49.793 10:33:43 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:49.793 10:33:43 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:49.793 10:33:43 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:19:49.793 10:33:43 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:49.793 10:33:43 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:49.793 10:33:43 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:19:49.793 10:33:43 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:49.793 10:33:43 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:49.793 10:33:43 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:19:49.793 10:33:43 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:19:49.793 10:33:43 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:19:49.793 10:33:43 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:19:49.793 10:33:43 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:19:49.793 10:33:43 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:19:49.793 10:33:43 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:19:49.793 10:33:43 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:19:49.793 10:33:43 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:19:49.793 
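The prologue above differs from the earlier raid_state_function_test run only in superblock_create_arg=-s, so every bdev_raid_create below writes an on-disk superblock; the JSON dumps that follow accordingly report "superblock": true and base bdevs that reserve metadata space ("data_offset": 2048, "data_size": 63488 instead of the raw 65536 blocks). As a minimal hand-driven sketch of the RPC sequence the harness is exercising (assuming a bdev_svc app is already listening on the same socket; only commands that appear verbatim in this log are reused):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for i in 1 2 3 4; do
        $rpc bdev_malloc_create 32 512 -b "BaseBdev$i"    # 32 MiB malloc disk, 512 B blocks -> 65536 blocks
    done
    $rpc bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
    $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'    # expect "online"
    $rpc bdev_malloc_delete BaseBdev1    # raid1 is redundant: state stays "online" with 3 of 4 base bdevs
    $rpc bdev_raid_delete Existed_Raid

Deleting the remaining base bdevs instead of the raid bdev, as the non-superblock pass above shows, eventually flips Existed_Raid from "online" to "offline" before the module frees it.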
10:33:43 -- bdev/bdev_raid.sh@226 -- # raid_pid=124547 00:19:49.793 Process raid pid: 124547 00:19:49.793 10:33:43 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 124547' 00:19:49.793 10:33:43 -- bdev/bdev_raid.sh@228 -- # waitforlisten 124547 /var/tmp/spdk-raid.sock 00:19:49.793 10:33:43 -- common/autotest_common.sh@819 -- # '[' -z 124547 ']' 00:19:49.793 10:33:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:49.793 10:33:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:49.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:49.793 10:33:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:49.793 10:33:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:49.793 10:33:43 -- common/autotest_common.sh@10 -- # set +x 00:19:49.793 10:33:43 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:49.793 [2024-07-12 10:33:43.692706] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:19:49.793 [2024-07-12 10:33:43.693038] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:50.051 [2024-07-12 10:33:43.858115] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.309 [2024-07-12 10:33:44.037060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:50.309 [2024-07-12 10:33:44.224708] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:50.874 10:33:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:50.874 10:33:44 -- common/autotest_common.sh@852 -- # return 0 00:19:50.874 10:33:44 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:50.874 [2024-07-12 10:33:44.771654] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:50.874 [2024-07-12 10:33:44.771746] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:50.874 [2024-07-12 10:33:44.771762] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:50.874 [2024-07-12 10:33:44.771787] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:50.874 [2024-07-12 10:33:44.771795] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:50.874 [2024-07-12 10:33:44.771833] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:50.874 [2024-07-12 10:33:44.771842] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:50.874 [2024-07-12 10:33:44.771864] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:50.874 10:33:44 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:50.874 10:33:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:50.874 10:33:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:50.874 10:33:44 -- bdev/bdev_raid.sh@119 -- # local 
raid_level=raid1 00:19:50.874 10:33:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:50.874 10:33:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:50.874 10:33:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:50.874 10:33:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:50.874 10:33:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:50.874 10:33:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:50.874 10:33:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:50.874 10:33:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:51.132 10:33:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:51.132 "name": "Existed_Raid", 00:19:51.132 "uuid": "a37dafa7-9bec-4996-aaa7-1705487e532f", 00:19:51.132 "strip_size_kb": 0, 00:19:51.132 "state": "configuring", 00:19:51.132 "raid_level": "raid1", 00:19:51.132 "superblock": true, 00:19:51.132 "num_base_bdevs": 4, 00:19:51.132 "num_base_bdevs_discovered": 0, 00:19:51.132 "num_base_bdevs_operational": 4, 00:19:51.132 "base_bdevs_list": [ 00:19:51.132 { 00:19:51.132 "name": "BaseBdev1", 00:19:51.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:51.132 "is_configured": false, 00:19:51.132 "data_offset": 0, 00:19:51.132 "data_size": 0 00:19:51.132 }, 00:19:51.132 { 00:19:51.132 "name": "BaseBdev2", 00:19:51.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:51.132 "is_configured": false, 00:19:51.132 "data_offset": 0, 00:19:51.132 "data_size": 0 00:19:51.132 }, 00:19:51.132 { 00:19:51.132 "name": "BaseBdev3", 00:19:51.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:51.132 "is_configured": false, 00:19:51.132 "data_offset": 0, 00:19:51.132 "data_size": 0 00:19:51.132 }, 00:19:51.132 { 00:19:51.132 "name": "BaseBdev4", 00:19:51.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:51.132 "is_configured": false, 00:19:51.132 "data_offset": 0, 00:19:51.132 "data_size": 0 00:19:51.132 } 00:19:51.132 ] 00:19:51.132 }' 00:19:51.132 10:33:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:51.132 10:33:45 -- common/autotest_common.sh@10 -- # set +x 00:19:52.066 10:33:45 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:52.066 [2024-07-12 10:33:45.819785] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:52.066 [2024-07-12 10:33:45.819815] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:19:52.066 10:33:45 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:52.324 [2024-07-12 10:33:46.059861] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:52.324 [2024-07-12 10:33:46.059903] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:52.324 [2024-07-12 10:33:46.059913] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:52.324 [2024-07-12 10:33:46.059944] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:52.324 [2024-07-12 10:33:46.059952] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:52.324 [2024-07-12 
10:33:46.059987] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:52.324 [2024-07-12 10:33:46.059995] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:52.324 [2024-07-12 10:33:46.060016] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:52.324 10:33:46 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:52.583 [2024-07-12 10:33:46.274068] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:52.583 BaseBdev1 00:19:52.583 10:33:46 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:19:52.583 10:33:46 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:19:52.583 10:33:46 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:52.583 10:33:46 -- common/autotest_common.sh@889 -- # local i 00:19:52.583 10:33:46 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:52.583 10:33:46 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:52.583 10:33:46 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:52.583 10:33:46 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:52.841 [ 00:19:52.841 { 00:19:52.841 "name": "BaseBdev1", 00:19:52.841 "aliases": [ 00:19:52.841 "779b1e66-1bbe-4127-9ff3-4186444c8a73" 00:19:52.841 ], 00:19:52.841 "product_name": "Malloc disk", 00:19:52.841 "block_size": 512, 00:19:52.841 "num_blocks": 65536, 00:19:52.841 "uuid": "779b1e66-1bbe-4127-9ff3-4186444c8a73", 00:19:52.841 "assigned_rate_limits": { 00:19:52.841 "rw_ios_per_sec": 0, 00:19:52.841 "rw_mbytes_per_sec": 0, 00:19:52.841 "r_mbytes_per_sec": 0, 00:19:52.841 "w_mbytes_per_sec": 0 00:19:52.841 }, 00:19:52.841 "claimed": true, 00:19:52.841 "claim_type": "exclusive_write", 00:19:52.841 "zoned": false, 00:19:52.841 "supported_io_types": { 00:19:52.841 "read": true, 00:19:52.841 "write": true, 00:19:52.841 "unmap": true, 00:19:52.841 "write_zeroes": true, 00:19:52.841 "flush": true, 00:19:52.841 "reset": true, 00:19:52.841 "compare": false, 00:19:52.841 "compare_and_write": false, 00:19:52.841 "abort": true, 00:19:52.841 "nvme_admin": false, 00:19:52.841 "nvme_io": false 00:19:52.841 }, 00:19:52.841 "memory_domains": [ 00:19:52.841 { 00:19:52.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:52.841 "dma_device_type": 2 00:19:52.841 } 00:19:52.841 ], 00:19:52.841 "driver_specific": {} 00:19:52.841 } 00:19:52.841 ] 00:19:52.841 10:33:46 -- common/autotest_common.sh@895 -- # return 0 00:19:52.841 10:33:46 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:52.841 10:33:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:52.841 10:33:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:52.841 10:33:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:52.841 10:33:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:52.841 10:33:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:52.841 10:33:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:52.841 10:33:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:52.841 10:33:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:52.841 10:33:46 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:19:52.841 10:33:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:52.841 10:33:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:53.099 10:33:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:53.099 "name": "Existed_Raid", 00:19:53.099 "uuid": "1aecfc25-a14e-4e9d-b5e1-800b3dcf1028", 00:19:53.099 "strip_size_kb": 0, 00:19:53.099 "state": "configuring", 00:19:53.099 "raid_level": "raid1", 00:19:53.099 "superblock": true, 00:19:53.099 "num_base_bdevs": 4, 00:19:53.099 "num_base_bdevs_discovered": 1, 00:19:53.099 "num_base_bdevs_operational": 4, 00:19:53.099 "base_bdevs_list": [ 00:19:53.099 { 00:19:53.099 "name": "BaseBdev1", 00:19:53.099 "uuid": "779b1e66-1bbe-4127-9ff3-4186444c8a73", 00:19:53.099 "is_configured": true, 00:19:53.099 "data_offset": 2048, 00:19:53.099 "data_size": 63488 00:19:53.099 }, 00:19:53.099 { 00:19:53.099 "name": "BaseBdev2", 00:19:53.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.099 "is_configured": false, 00:19:53.099 "data_offset": 0, 00:19:53.099 "data_size": 0 00:19:53.099 }, 00:19:53.099 { 00:19:53.099 "name": "BaseBdev3", 00:19:53.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.099 "is_configured": false, 00:19:53.099 "data_offset": 0, 00:19:53.099 "data_size": 0 00:19:53.099 }, 00:19:53.099 { 00:19:53.099 "name": "BaseBdev4", 00:19:53.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.099 "is_configured": false, 00:19:53.099 "data_offset": 0, 00:19:53.099 "data_size": 0 00:19:53.099 } 00:19:53.100 ] 00:19:53.100 }' 00:19:53.100 10:33:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:53.100 10:33:46 -- common/autotest_common.sh@10 -- # set +x 00:19:53.666 10:33:47 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:53.924 [2024-07-12 10:33:47.722288] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:53.924 [2024-07-12 10:33:47.722328] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:19:53.924 10:33:47 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:19:53.924 10:33:47 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:54.182 10:33:47 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:54.440 BaseBdev1 00:19:54.440 10:33:48 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:19:54.440 10:33:48 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:19:54.440 10:33:48 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:54.440 10:33:48 -- common/autotest_common.sh@889 -- # local i 00:19:54.440 10:33:48 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:54.440 10:33:48 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:54.440 10:33:48 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:54.698 10:33:48 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:54.956 [ 00:19:54.956 { 00:19:54.956 "name": "BaseBdev1", 00:19:54.956 "aliases": [ 00:19:54.956 
"00516f96-706f-4fc4-b1c5-8b82ba77427c" 00:19:54.956 ], 00:19:54.956 "product_name": "Malloc disk", 00:19:54.956 "block_size": 512, 00:19:54.956 "num_blocks": 65536, 00:19:54.956 "uuid": "00516f96-706f-4fc4-b1c5-8b82ba77427c", 00:19:54.956 "assigned_rate_limits": { 00:19:54.956 "rw_ios_per_sec": 0, 00:19:54.956 "rw_mbytes_per_sec": 0, 00:19:54.956 "r_mbytes_per_sec": 0, 00:19:54.956 "w_mbytes_per_sec": 0 00:19:54.956 }, 00:19:54.956 "claimed": false, 00:19:54.956 "zoned": false, 00:19:54.956 "supported_io_types": { 00:19:54.957 "read": true, 00:19:54.957 "write": true, 00:19:54.957 "unmap": true, 00:19:54.957 "write_zeroes": true, 00:19:54.957 "flush": true, 00:19:54.957 "reset": true, 00:19:54.957 "compare": false, 00:19:54.957 "compare_and_write": false, 00:19:54.957 "abort": true, 00:19:54.957 "nvme_admin": false, 00:19:54.957 "nvme_io": false 00:19:54.957 }, 00:19:54.957 "memory_domains": [ 00:19:54.957 { 00:19:54.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:54.957 "dma_device_type": 2 00:19:54.957 } 00:19:54.957 ], 00:19:54.957 "driver_specific": {} 00:19:54.957 } 00:19:54.957 ] 00:19:54.957 10:33:48 -- common/autotest_common.sh@895 -- # return 0 00:19:54.957 10:33:48 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:54.957 [2024-07-12 10:33:48.872097] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:55.216 [2024-07-12 10:33:48.874151] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:55.216 [2024-07-12 10:33:48.874242] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:55.216 [2024-07-12 10:33:48.874254] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:55.216 [2024-07-12 10:33:48.874281] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:55.216 [2024-07-12 10:33:48.874289] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:55.216 [2024-07-12 10:33:48.874307] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:55.216 10:33:48 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:19:55.216 10:33:48 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:55.216 10:33:48 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:55.216 10:33:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:55.216 10:33:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:55.216 10:33:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:55.216 10:33:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:55.216 10:33:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:55.216 10:33:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:55.216 10:33:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:55.216 10:33:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:55.216 10:33:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:55.216 10:33:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:55.216 10:33:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:55.216 10:33:49 -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:19:55.216 "name": "Existed_Raid", 00:19:55.216 "uuid": "e2f9f412-3e31-4ed5-b02c-28f271fa3f5c", 00:19:55.217 "strip_size_kb": 0, 00:19:55.217 "state": "configuring", 00:19:55.217 "raid_level": "raid1", 00:19:55.217 "superblock": true, 00:19:55.217 "num_base_bdevs": 4, 00:19:55.217 "num_base_bdevs_discovered": 1, 00:19:55.217 "num_base_bdevs_operational": 4, 00:19:55.217 "base_bdevs_list": [ 00:19:55.217 { 00:19:55.217 "name": "BaseBdev1", 00:19:55.217 "uuid": "00516f96-706f-4fc4-b1c5-8b82ba77427c", 00:19:55.217 "is_configured": true, 00:19:55.217 "data_offset": 2048, 00:19:55.217 "data_size": 63488 00:19:55.217 }, 00:19:55.217 { 00:19:55.217 "name": "BaseBdev2", 00:19:55.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:55.217 "is_configured": false, 00:19:55.217 "data_offset": 0, 00:19:55.217 "data_size": 0 00:19:55.217 }, 00:19:55.217 { 00:19:55.217 "name": "BaseBdev3", 00:19:55.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:55.217 "is_configured": false, 00:19:55.217 "data_offset": 0, 00:19:55.217 "data_size": 0 00:19:55.217 }, 00:19:55.217 { 00:19:55.217 "name": "BaseBdev4", 00:19:55.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:55.217 "is_configured": false, 00:19:55.217 "data_offset": 0, 00:19:55.217 "data_size": 0 00:19:55.217 } 00:19:55.217 ] 00:19:55.217 }' 00:19:55.217 10:33:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:55.217 10:33:49 -- common/autotest_common.sh@10 -- # set +x 00:19:56.153 10:33:49 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:56.153 [2024-07-12 10:33:50.038506] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:56.153 BaseBdev2 00:19:56.153 10:33:50 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:19:56.153 10:33:50 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:19:56.153 10:33:50 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:56.153 10:33:50 -- common/autotest_common.sh@889 -- # local i 00:19:56.153 10:33:50 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:56.153 10:33:50 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:56.153 10:33:50 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:56.436 10:33:50 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:56.694 [ 00:19:56.694 { 00:19:56.694 "name": "BaseBdev2", 00:19:56.694 "aliases": [ 00:19:56.694 "ae81b037-6d92-4a82-b1c3-9f0bc9d26825" 00:19:56.694 ], 00:19:56.694 "product_name": "Malloc disk", 00:19:56.694 "block_size": 512, 00:19:56.694 "num_blocks": 65536, 00:19:56.694 "uuid": "ae81b037-6d92-4a82-b1c3-9f0bc9d26825", 00:19:56.694 "assigned_rate_limits": { 00:19:56.694 "rw_ios_per_sec": 0, 00:19:56.694 "rw_mbytes_per_sec": 0, 00:19:56.694 "r_mbytes_per_sec": 0, 00:19:56.694 "w_mbytes_per_sec": 0 00:19:56.694 }, 00:19:56.694 "claimed": true, 00:19:56.694 "claim_type": "exclusive_write", 00:19:56.694 "zoned": false, 00:19:56.694 "supported_io_types": { 00:19:56.694 "read": true, 00:19:56.694 "write": true, 00:19:56.694 "unmap": true, 00:19:56.694 "write_zeroes": true, 00:19:56.694 "flush": true, 00:19:56.694 "reset": true, 00:19:56.694 "compare": false, 00:19:56.694 "compare_and_write": false, 00:19:56.694 "abort": true, 00:19:56.694 "nvme_admin": false, 00:19:56.694 
"nvme_io": false 00:19:56.694 }, 00:19:56.694 "memory_domains": [ 00:19:56.694 { 00:19:56.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:56.694 "dma_device_type": 2 00:19:56.694 } 00:19:56.694 ], 00:19:56.694 "driver_specific": {} 00:19:56.694 } 00:19:56.694 ] 00:19:56.694 10:33:50 -- common/autotest_common.sh@895 -- # return 0 00:19:56.694 10:33:50 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:56.694 10:33:50 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:56.694 10:33:50 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:56.694 10:33:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:56.694 10:33:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:56.694 10:33:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:56.694 10:33:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:56.694 10:33:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:56.694 10:33:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:56.694 10:33:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:56.694 10:33:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:56.694 10:33:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:56.694 10:33:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:56.694 10:33:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:56.951 10:33:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:56.951 "name": "Existed_Raid", 00:19:56.951 "uuid": "e2f9f412-3e31-4ed5-b02c-28f271fa3f5c", 00:19:56.951 "strip_size_kb": 0, 00:19:56.951 "state": "configuring", 00:19:56.951 "raid_level": "raid1", 00:19:56.951 "superblock": true, 00:19:56.951 "num_base_bdevs": 4, 00:19:56.951 "num_base_bdevs_discovered": 2, 00:19:56.951 "num_base_bdevs_operational": 4, 00:19:56.951 "base_bdevs_list": [ 00:19:56.951 { 00:19:56.951 "name": "BaseBdev1", 00:19:56.951 "uuid": "00516f96-706f-4fc4-b1c5-8b82ba77427c", 00:19:56.951 "is_configured": true, 00:19:56.951 "data_offset": 2048, 00:19:56.951 "data_size": 63488 00:19:56.951 }, 00:19:56.951 { 00:19:56.951 "name": "BaseBdev2", 00:19:56.951 "uuid": "ae81b037-6d92-4a82-b1c3-9f0bc9d26825", 00:19:56.951 "is_configured": true, 00:19:56.951 "data_offset": 2048, 00:19:56.951 "data_size": 63488 00:19:56.951 }, 00:19:56.951 { 00:19:56.951 "name": "BaseBdev3", 00:19:56.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:56.951 "is_configured": false, 00:19:56.951 "data_offset": 0, 00:19:56.951 "data_size": 0 00:19:56.951 }, 00:19:56.951 { 00:19:56.951 "name": "BaseBdev4", 00:19:56.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:56.951 "is_configured": false, 00:19:56.951 "data_offset": 0, 00:19:56.951 "data_size": 0 00:19:56.951 } 00:19:56.951 ] 00:19:56.951 }' 00:19:56.951 10:33:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:56.951 10:33:50 -- common/autotest_common.sh@10 -- # set +x 00:19:57.516 10:33:51 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:57.774 [2024-07-12 10:33:51.643276] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:57.774 BaseBdev3 00:19:57.774 10:33:51 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:19:57.774 10:33:51 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:19:57.774 10:33:51 -- 
common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:57.774 10:33:51 -- common/autotest_common.sh@889 -- # local i 00:19:57.774 10:33:51 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:57.774 10:33:51 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:57.774 10:33:51 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:58.032 10:33:51 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:58.290 [ 00:19:58.290 { 00:19:58.290 "name": "BaseBdev3", 00:19:58.290 "aliases": [ 00:19:58.290 "4a731cf1-1aef-4ca2-b69e-15baf2ab6dd6" 00:19:58.290 ], 00:19:58.290 "product_name": "Malloc disk", 00:19:58.290 "block_size": 512, 00:19:58.290 "num_blocks": 65536, 00:19:58.290 "uuid": "4a731cf1-1aef-4ca2-b69e-15baf2ab6dd6", 00:19:58.290 "assigned_rate_limits": { 00:19:58.290 "rw_ios_per_sec": 0, 00:19:58.290 "rw_mbytes_per_sec": 0, 00:19:58.290 "r_mbytes_per_sec": 0, 00:19:58.290 "w_mbytes_per_sec": 0 00:19:58.290 }, 00:19:58.290 "claimed": true, 00:19:58.290 "claim_type": "exclusive_write", 00:19:58.290 "zoned": false, 00:19:58.290 "supported_io_types": { 00:19:58.290 "read": true, 00:19:58.290 "write": true, 00:19:58.290 "unmap": true, 00:19:58.290 "write_zeroes": true, 00:19:58.290 "flush": true, 00:19:58.290 "reset": true, 00:19:58.290 "compare": false, 00:19:58.290 "compare_and_write": false, 00:19:58.290 "abort": true, 00:19:58.290 "nvme_admin": false, 00:19:58.290 "nvme_io": false 00:19:58.290 }, 00:19:58.290 "memory_domains": [ 00:19:58.290 { 00:19:58.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:58.290 "dma_device_type": 2 00:19:58.290 } 00:19:58.290 ], 00:19:58.290 "driver_specific": {} 00:19:58.290 } 00:19:58.290 ] 00:19:58.290 10:33:52 -- common/autotest_common.sh@895 -- # return 0 00:19:58.290 10:33:52 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:58.290 10:33:52 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:58.290 10:33:52 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:58.291 10:33:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:58.291 10:33:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:58.291 10:33:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:58.291 10:33:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:58.291 10:33:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:58.291 10:33:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:58.291 10:33:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:58.291 10:33:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:58.291 10:33:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:58.291 10:33:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:58.291 10:33:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:58.549 10:33:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:58.549 "name": "Existed_Raid", 00:19:58.549 "uuid": "e2f9f412-3e31-4ed5-b02c-28f271fa3f5c", 00:19:58.549 "strip_size_kb": 0, 00:19:58.549 "state": "configuring", 00:19:58.549 "raid_level": "raid1", 00:19:58.549 "superblock": true, 00:19:58.549 "num_base_bdevs": 4, 00:19:58.549 "num_base_bdevs_discovered": 3, 00:19:58.549 "num_base_bdevs_operational": 4, 00:19:58.549 
"base_bdevs_list": [ 00:19:58.549 { 00:19:58.549 "name": "BaseBdev1", 00:19:58.549 "uuid": "00516f96-706f-4fc4-b1c5-8b82ba77427c", 00:19:58.549 "is_configured": true, 00:19:58.549 "data_offset": 2048, 00:19:58.549 "data_size": 63488 00:19:58.549 }, 00:19:58.549 { 00:19:58.549 "name": "BaseBdev2", 00:19:58.549 "uuid": "ae81b037-6d92-4a82-b1c3-9f0bc9d26825", 00:19:58.549 "is_configured": true, 00:19:58.549 "data_offset": 2048, 00:19:58.549 "data_size": 63488 00:19:58.549 }, 00:19:58.549 { 00:19:58.549 "name": "BaseBdev3", 00:19:58.549 "uuid": "4a731cf1-1aef-4ca2-b69e-15baf2ab6dd6", 00:19:58.549 "is_configured": true, 00:19:58.549 "data_offset": 2048, 00:19:58.549 "data_size": 63488 00:19:58.549 }, 00:19:58.549 { 00:19:58.549 "name": "BaseBdev4", 00:19:58.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.549 "is_configured": false, 00:19:58.549 "data_offset": 0, 00:19:58.549 "data_size": 0 00:19:58.549 } 00:19:58.549 ] 00:19:58.549 }' 00:19:58.549 10:33:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:58.549 10:33:52 -- common/autotest_common.sh@10 -- # set +x 00:19:59.483 10:33:53 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:19:59.483 [2024-07-12 10:33:53.271243] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:59.483 [2024-07-12 10:33:53.271513] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:19:59.483 [2024-07-12 10:33:53.271561] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:59.483 [2024-07-12 10:33:53.271709] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:19:59.483 [2024-07-12 10:33:53.272110] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:19:59.483 [2024-07-12 10:33:53.272130] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:19:59.483 [2024-07-12 10:33:53.272292] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:59.483 BaseBdev4 00:19:59.483 10:33:53 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:19:59.483 10:33:53 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:19:59.483 10:33:53 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:59.483 10:33:53 -- common/autotest_common.sh@889 -- # local i 00:19:59.483 10:33:53 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:59.483 10:33:53 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:59.483 10:33:53 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:59.741 10:33:53 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:59.741 [ 00:19:59.741 { 00:19:59.741 "name": "BaseBdev4", 00:19:59.741 "aliases": [ 00:19:59.741 "73b5b7d2-7017-4c9d-9e68-3aa809db4b3f" 00:19:59.741 ], 00:19:59.741 "product_name": "Malloc disk", 00:19:59.741 "block_size": 512, 00:19:59.741 "num_blocks": 65536, 00:19:59.741 "uuid": "73b5b7d2-7017-4c9d-9e68-3aa809db4b3f", 00:19:59.741 "assigned_rate_limits": { 00:19:59.741 "rw_ios_per_sec": 0, 00:19:59.741 "rw_mbytes_per_sec": 0, 00:19:59.741 "r_mbytes_per_sec": 0, 00:19:59.741 "w_mbytes_per_sec": 0 00:19:59.741 }, 00:19:59.741 "claimed": true, 00:19:59.741 "claim_type": 
"exclusive_write", 00:19:59.741 "zoned": false, 00:19:59.741 "supported_io_types": { 00:19:59.741 "read": true, 00:19:59.741 "write": true, 00:19:59.741 "unmap": true, 00:19:59.741 "write_zeroes": true, 00:19:59.741 "flush": true, 00:19:59.741 "reset": true, 00:19:59.741 "compare": false, 00:19:59.741 "compare_and_write": false, 00:19:59.741 "abort": true, 00:19:59.741 "nvme_admin": false, 00:19:59.741 "nvme_io": false 00:19:59.741 }, 00:19:59.741 "memory_domains": [ 00:19:59.741 { 00:19:59.741 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:59.741 "dma_device_type": 2 00:19:59.741 } 00:19:59.741 ], 00:19:59.741 "driver_specific": {} 00:19:59.741 } 00:19:59.741 ] 00:19:59.741 10:33:53 -- common/autotest_common.sh@895 -- # return 0 00:19:59.741 10:33:53 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:59.741 10:33:53 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:59.741 10:33:53 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:19:59.741 10:33:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:59.741 10:33:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:59.741 10:33:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:59.741 10:33:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:59.741 10:33:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:59.741 10:33:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:59.741 10:33:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:59.741 10:33:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:59.741 10:33:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:59.741 10:33:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:59.741 10:33:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:00.000 10:33:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:00.000 "name": "Existed_Raid", 00:20:00.000 "uuid": "e2f9f412-3e31-4ed5-b02c-28f271fa3f5c", 00:20:00.000 "strip_size_kb": 0, 00:20:00.000 "state": "online", 00:20:00.000 "raid_level": "raid1", 00:20:00.000 "superblock": true, 00:20:00.000 "num_base_bdevs": 4, 00:20:00.000 "num_base_bdevs_discovered": 4, 00:20:00.000 "num_base_bdevs_operational": 4, 00:20:00.000 "base_bdevs_list": [ 00:20:00.000 { 00:20:00.000 "name": "BaseBdev1", 00:20:00.000 "uuid": "00516f96-706f-4fc4-b1c5-8b82ba77427c", 00:20:00.000 "is_configured": true, 00:20:00.000 "data_offset": 2048, 00:20:00.000 "data_size": 63488 00:20:00.000 }, 00:20:00.000 { 00:20:00.000 "name": "BaseBdev2", 00:20:00.000 "uuid": "ae81b037-6d92-4a82-b1c3-9f0bc9d26825", 00:20:00.000 "is_configured": true, 00:20:00.000 "data_offset": 2048, 00:20:00.000 "data_size": 63488 00:20:00.000 }, 00:20:00.000 { 00:20:00.000 "name": "BaseBdev3", 00:20:00.000 "uuid": "4a731cf1-1aef-4ca2-b69e-15baf2ab6dd6", 00:20:00.000 "is_configured": true, 00:20:00.000 "data_offset": 2048, 00:20:00.000 "data_size": 63488 00:20:00.000 }, 00:20:00.000 { 00:20:00.000 "name": "BaseBdev4", 00:20:00.000 "uuid": "73b5b7d2-7017-4c9d-9e68-3aa809db4b3f", 00:20:00.000 "is_configured": true, 00:20:00.000 "data_offset": 2048, 00:20:00.000 "data_size": 63488 00:20:00.000 } 00:20:00.000 ] 00:20:00.000 }' 00:20:00.000 10:33:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:00.000 10:33:53 -- common/autotest_common.sh@10 -- # set +x 00:20:00.566 10:33:54 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:00.825 [2024-07-12 10:33:54.675702] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:01.083 10:33:54 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:20:01.083 10:33:54 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:20:01.083 10:33:54 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:20:01.083 10:33:54 -- bdev/bdev_raid.sh@196 -- # return 0 00:20:01.083 10:33:54 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:20:01.083 10:33:54 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:20:01.083 10:33:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:01.083 10:33:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:01.083 10:33:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:01.083 10:33:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:01.083 10:33:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:01.083 10:33:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:01.083 10:33:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:01.083 10:33:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:01.083 10:33:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:01.083 10:33:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:01.083 10:33:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:01.083 10:33:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:01.083 "name": "Existed_Raid", 00:20:01.083 "uuid": "e2f9f412-3e31-4ed5-b02c-28f271fa3f5c", 00:20:01.083 "strip_size_kb": 0, 00:20:01.083 "state": "online", 00:20:01.083 "raid_level": "raid1", 00:20:01.083 "superblock": true, 00:20:01.083 "num_base_bdevs": 4, 00:20:01.083 "num_base_bdevs_discovered": 3, 00:20:01.083 "num_base_bdevs_operational": 3, 00:20:01.083 "base_bdevs_list": [ 00:20:01.083 { 00:20:01.083 "name": null, 00:20:01.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:01.083 "is_configured": false, 00:20:01.083 "data_offset": 2048, 00:20:01.083 "data_size": 63488 00:20:01.083 }, 00:20:01.083 { 00:20:01.083 "name": "BaseBdev2", 00:20:01.083 "uuid": "ae81b037-6d92-4a82-b1c3-9f0bc9d26825", 00:20:01.083 "is_configured": true, 00:20:01.083 "data_offset": 2048, 00:20:01.083 "data_size": 63488 00:20:01.083 }, 00:20:01.083 { 00:20:01.083 "name": "BaseBdev3", 00:20:01.083 "uuid": "4a731cf1-1aef-4ca2-b69e-15baf2ab6dd6", 00:20:01.083 "is_configured": true, 00:20:01.083 "data_offset": 2048, 00:20:01.083 "data_size": 63488 00:20:01.083 }, 00:20:01.083 { 00:20:01.083 "name": "BaseBdev4", 00:20:01.083 "uuid": "73b5b7d2-7017-4c9d-9e68-3aa809db4b3f", 00:20:01.083 "is_configured": true, 00:20:01.083 "data_offset": 2048, 00:20:01.083 "data_size": 63488 00:20:01.083 } 00:20:01.083 ] 00:20:01.083 }' 00:20:01.083 10:33:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:01.083 10:33:54 -- common/autotest_common.sh@10 -- # set +x 00:20:02.020 10:33:55 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:20:02.020 10:33:55 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:02.020 10:33:55 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:02.020 10:33:55 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:20:02.020 10:33:55 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:20:02.020 10:33:55 -- 
bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:02.020 10:33:55 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:20:02.279 [2024-07-12 10:33:56.119951] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:02.537 10:33:56 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:20:02.538 10:33:56 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:02.538 10:33:56 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:02.538 10:33:56 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:20:02.538 10:33:56 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:20:02.538 10:33:56 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:02.538 10:33:56 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:20:02.796 [2024-07-12 10:33:56.667891] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:03.054 10:33:56 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:20:03.054 10:33:56 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:03.054 10:33:56 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:20:03.054 10:33:56 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:03.054 10:33:56 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:20:03.054 10:33:56 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:03.054 10:33:56 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:20:03.312 [2024-07-12 10:33:57.192388] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:20:03.312 [2024-07-12 10:33:57.192424] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:03.312 [2024-07-12 10:33:57.192494] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:03.570 [2024-07-12 10:33:57.260709] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:03.570 [2024-07-12 10:33:57.260743] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:20:03.570 10:33:57 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:20:03.570 10:33:57 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:03.570 10:33:57 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:03.570 10:33:57 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:20:03.570 10:33:57 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:20:03.570 10:33:57 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:20:03.570 10:33:57 -- bdev/bdev_raid.sh@287 -- # killprocess 124547 00:20:03.570 10:33:57 -- common/autotest_common.sh@926 -- # '[' -z 124547 ']' 00:20:03.570 10:33:57 -- common/autotest_common.sh@930 -- # kill -0 124547 00:20:03.570 10:33:57 -- common/autotest_common.sh@931 -- # uname 00:20:03.570 10:33:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:03.570 10:33:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 124547 00:20:03.570 killing process with pid 124547 00:20:03.570 10:33:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0 
00:20:03.570 10:33:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:03.570 10:33:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 124547' 00:20:03.570 10:33:57 -- common/autotest_common.sh@945 -- # kill 124547 00:20:03.570 10:33:57 -- common/autotest_common.sh@950 -- # wait 124547 00:20:03.570 [2024-07-12 10:33:57.473191] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:03.570 [2024-07-12 10:33:57.473291] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:04.945 ************************************ 00:20:04.945 END TEST raid_state_function_test_sb 00:20:04.945 ************************************ 00:20:04.945 10:33:58 -- bdev/bdev_raid.sh@289 -- # return 0 00:20:04.945 00:20:04.945 real 0m14.875s 00:20:04.945 user 0m26.762s 00:20:04.945 sys 0m1.629s 00:20:04.945 10:33:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:04.945 10:33:58 -- common/autotest_common.sh@10 -- # set +x 00:20:04.945 10:33:58 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:20:04.945 10:33:58 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:20:04.945 10:33:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:04.945 10:33:58 -- common/autotest_common.sh@10 -- # set +x 00:20:04.945 ************************************ 00:20:04.945 START TEST raid_superblock_test 00:20:04.945 ************************************ 00:20:04.945 10:33:58 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid1 4 00:20:04.945 10:33:58 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:20:04.945 10:33:58 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:20:04.945 10:33:58 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:20:04.945 10:33:58 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:20:04.945 10:33:58 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:20:04.945 10:33:58 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:20:04.945 10:33:58 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:20:04.945 10:33:58 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:20:04.945 10:33:58 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:20:04.945 10:33:58 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:20:04.945 10:33:58 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:20:04.945 10:33:58 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:20:04.945 10:33:58 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:20:04.945 10:33:58 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:20:04.945 10:33:58 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:20:04.945 10:33:58 -- bdev/bdev_raid.sh@357 -- # raid_pid=125028 00:20:04.945 10:33:58 -- bdev/bdev_raid.sh@358 -- # waitforlisten 125028 /var/tmp/spdk-raid.sock 00:20:04.945 10:33:58 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:20:04.945 10:33:58 -- common/autotest_common.sh@819 -- # '[' -z 125028 ']' 00:20:04.945 10:33:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:04.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:04.945 10:33:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:04.945 10:33:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:20:04.945 10:33:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:04.945 10:33:58 -- common/autotest_common.sh@10 -- # set +x 00:20:04.945 [2024-07-12 10:33:58.627754] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:20:04.946 [2024-07-12 10:33:58.628346] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125028 ] 00:20:04.946 [2024-07-12 10:33:58.805174] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:05.204 [2024-07-12 10:33:59.050670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:05.462 [2024-07-12 10:33:59.237132] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:05.721 10:33:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:05.721 10:33:59 -- common/autotest_common.sh@852 -- # return 0 00:20:05.721 10:33:59 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:20:05.721 10:33:59 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:20:05.721 10:33:59 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:20:05.721 10:33:59 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:20:05.721 10:33:59 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:05.721 10:33:59 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:05.721 10:33:59 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:20:05.721 10:33:59 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:05.721 10:33:59 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:20:05.979 malloc1 00:20:05.979 10:33:59 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:06.237 [2024-07-12 10:33:59.947109] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:06.237 [2024-07-12 10:33:59.947200] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:06.237 [2024-07-12 10:33:59.947232] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:20:06.237 [2024-07-12 10:33:59.947278] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:06.237 [2024-07-12 10:33:59.949626] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:06.237 [2024-07-12 10:33:59.949674] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:06.237 pt1 00:20:06.238 10:33:59 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:20:06.238 10:33:59 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:20:06.238 10:33:59 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:20:06.238 10:33:59 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:20:06.238 10:33:59 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:06.238 10:33:59 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:06.238 10:33:59 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:20:06.238 10:33:59 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:06.238 10:33:59 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:20:06.495 malloc2 00:20:06.495 10:34:00 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:06.753 [2024-07-12 10:34:00.412344] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:06.753 [2024-07-12 10:34:00.412419] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:06.753 [2024-07-12 10:34:00.412490] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:20:06.753 [2024-07-12 10:34:00.412536] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:06.753 [2024-07-12 10:34:00.414631] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:06.753 [2024-07-12 10:34:00.414673] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:06.753 pt2 00:20:06.753 10:34:00 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:20:06.753 10:34:00 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:20:06.753 10:34:00 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:20:06.753 10:34:00 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:20:06.753 10:34:00 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:20:06.753 10:34:00 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:06.753 10:34:00 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:20:06.753 10:34:00 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:06.753 10:34:00 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:20:06.753 malloc3 00:20:06.753 10:34:00 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:07.010 [2024-07-12 10:34:00.808843] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:07.010 [2024-07-12 10:34:00.808910] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:07.010 [2024-07-12 10:34:00.808948] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:20:07.010 [2024-07-12 10:34:00.808989] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:07.010 [2024-07-12 10:34:00.810780] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:07.010 [2024-07-12 10:34:00.810826] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:07.010 pt3 00:20:07.010 10:34:00 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:20:07.010 10:34:00 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:20:07.010 10:34:00 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:20:07.010 10:34:00 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:20:07.010 10:34:00 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:20:07.010 10:34:00 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:07.010 10:34:00 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:20:07.011 10:34:00 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:07.011 10:34:00 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:20:07.268 malloc4 00:20:07.268 10:34:01 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:07.525 [2024-07-12 10:34:01.214358] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:07.525 [2024-07-12 10:34:01.214444] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:07.525 [2024-07-12 10:34:01.214486] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:07.525 [2024-07-12 10:34:01.214526] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:07.525 [2024-07-12 10:34:01.216419] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:07.525 [2024-07-12 10:34:01.216465] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:20:07.525 pt4 00:20:07.525 10:34:01 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:20:07.525 10:34:01 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:20:07.525 10:34:01 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:20:07.525 [2024-07-12 10:34:01.390417] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:07.525 [2024-07-12 10:34:01.391942] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:07.525 [2024-07-12 10:34:01.392012] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:07.525 [2024-07-12 10:34:01.392068] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:07.525 [2024-07-12 10:34:01.392263] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:20:07.525 [2024-07-12 10:34:01.392279] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:07.525 [2024-07-12 10:34:01.392399] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:20:07.525 [2024-07-12 10:34:01.392705] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:20:07.525 [2024-07-12 10:34:01.392726] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:20:07.525 [2024-07-12 10:34:01.392845] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:07.525 10:34:01 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:20:07.525 10:34:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:07.525 10:34:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:07.525 10:34:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:07.525 10:34:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:07.525 10:34:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:07.525 10:34:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:07.525 10:34:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:07.525 10:34:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:07.525 10:34:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:07.525 10:34:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:20:07.525 10:34:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:07.783 10:34:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:07.783 "name": "raid_bdev1", 00:20:07.783 "uuid": "c1fe357d-547f-40dc-97a6-5d361276d85f", 00:20:07.783 "strip_size_kb": 0, 00:20:07.783 "state": "online", 00:20:07.783 "raid_level": "raid1", 00:20:07.783 "superblock": true, 00:20:07.783 "num_base_bdevs": 4, 00:20:07.783 "num_base_bdevs_discovered": 4, 00:20:07.783 "num_base_bdevs_operational": 4, 00:20:07.783 "base_bdevs_list": [ 00:20:07.783 { 00:20:07.783 "name": "pt1", 00:20:07.783 "uuid": "75256c86-c737-5e57-b113-4d86f1d9813a", 00:20:07.783 "is_configured": true, 00:20:07.783 "data_offset": 2048, 00:20:07.783 "data_size": 63488 00:20:07.783 }, 00:20:07.783 { 00:20:07.783 "name": "pt2", 00:20:07.783 "uuid": "5cdc1065-06cb-5ca7-bfa2-dc923f5f628c", 00:20:07.783 "is_configured": true, 00:20:07.783 "data_offset": 2048, 00:20:07.783 "data_size": 63488 00:20:07.783 }, 00:20:07.783 { 00:20:07.783 "name": "pt3", 00:20:07.783 "uuid": "14d2e258-b12c-52b7-b08f-d7dc0fb8655c", 00:20:07.783 "is_configured": true, 00:20:07.783 "data_offset": 2048, 00:20:07.783 "data_size": 63488 00:20:07.783 }, 00:20:07.783 { 00:20:07.783 "name": "pt4", 00:20:07.783 "uuid": "857efe55-59e2-5489-bf7a-90e1a2304bc8", 00:20:07.783 "is_configured": true, 00:20:07.783 "data_offset": 2048, 00:20:07.783 "data_size": 63488 00:20:07.783 } 00:20:07.783 ] 00:20:07.783 }' 00:20:07.783 10:34:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:07.783 10:34:01 -- common/autotest_common.sh@10 -- # set +x 00:20:08.347 10:34:02 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:08.347 10:34:02 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:20:08.605 [2024-07-12 10:34:02.448532] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:08.605 10:34:02 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=c1fe357d-547f-40dc-97a6-5d361276d85f 00:20:08.605 10:34:02 -- bdev/bdev_raid.sh@380 -- # '[' -z c1fe357d-547f-40dc-97a6-5d361276d85f ']' 00:20:08.605 10:34:02 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:08.863 [2024-07-12 10:34:02.708336] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:08.863 [2024-07-12 10:34:02.708359] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:08.863 [2024-07-12 10:34:02.708446] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:08.863 [2024-07-12 10:34:02.708529] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:08.863 [2024-07-12 10:34:02.708539] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:20:08.863 10:34:02 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:08.863 10:34:02 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:20:09.121 10:34:02 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:20:09.121 10:34:02 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:20:09.121 10:34:02 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:20:09.121 10:34:02 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
00:20:09.378 10:34:03 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:20:09.378 10:34:03 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:09.635 10:34:03 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:20:09.635 10:34:03 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:20:09.892 10:34:03 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:20:09.892 10:34:03 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:20:09.892 10:34:03 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:20:09.892 10:34:03 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:10.149 10:34:03 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:20:10.150 10:34:03 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:20:10.150 10:34:03 -- common/autotest_common.sh@640 -- # local es=0 00:20:10.150 10:34:03 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:20:10.150 10:34:03 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:10.150 10:34:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:10.150 10:34:03 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:10.150 10:34:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:10.150 10:34:03 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:10.150 10:34:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:10.150 10:34:03 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:10.150 10:34:03 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:20:10.150 10:34:03 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:20:10.407 [2024-07-12 10:34:04.140531] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:10.407 [2024-07-12 10:34:04.142300] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:10.407 [2024-07-12 10:34:04.142357] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:20:10.407 [2024-07-12 10:34:04.142399] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:20:10.407 [2024-07-12 10:34:04.142452] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:20:10.407 [2024-07-12 10:34:04.142521] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:20:10.407 [2024-07-12 10:34:04.142555] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:20:10.407 [2024-07-12 10:34:04.142608] 
bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:20:10.407 [2024-07-12 10:34:04.142632] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:10.407 [2024-07-12 10:34:04.142643] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state configuring 00:20:10.407 request: 00:20:10.407 { 00:20:10.407 "name": "raid_bdev1", 00:20:10.407 "raid_level": "raid1", 00:20:10.407 "base_bdevs": [ 00:20:10.407 "malloc1", 00:20:10.407 "malloc2", 00:20:10.407 "malloc3", 00:20:10.407 "malloc4" 00:20:10.407 ], 00:20:10.407 "superblock": false, 00:20:10.407 "method": "bdev_raid_create", 00:20:10.407 "req_id": 1 00:20:10.407 } 00:20:10.407 Got JSON-RPC error response 00:20:10.407 response: 00:20:10.407 { 00:20:10.407 "code": -17, 00:20:10.407 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:10.407 } 00:20:10.407 10:34:04 -- common/autotest_common.sh@643 -- # es=1 00:20:10.407 10:34:04 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:10.407 10:34:04 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:10.407 10:34:04 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:10.407 10:34:04 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:10.407 10:34:04 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:20:10.664 10:34:04 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:20:10.664 10:34:04 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:20:10.664 10:34:04 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:10.664 [2024-07-12 10:34:04.540548] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:10.664 [2024-07-12 10:34:04.540607] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:10.664 [2024-07-12 10:34:04.540636] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:20:10.664 [2024-07-12 10:34:04.540659] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:10.664 [2024-07-12 10:34:04.542613] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:10.664 [2024-07-12 10:34:04.542674] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:10.664 [2024-07-12 10:34:04.542762] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:20:10.664 [2024-07-12 10:34:04.542815] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:10.664 pt1 00:20:10.664 10:34:04 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:20:10.664 10:34:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:10.664 10:34:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:10.664 10:34:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:10.664 10:34:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:10.664 10:34:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:10.664 10:34:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:10.664 10:34:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:10.664 10:34:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:10.664 10:34:04 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:20:10.664 10:34:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:10.664 10:34:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:10.922 10:34:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:10.922 "name": "raid_bdev1", 00:20:10.922 "uuid": "c1fe357d-547f-40dc-97a6-5d361276d85f", 00:20:10.922 "strip_size_kb": 0, 00:20:10.922 "state": "configuring", 00:20:10.922 "raid_level": "raid1", 00:20:10.922 "superblock": true, 00:20:10.922 "num_base_bdevs": 4, 00:20:10.922 "num_base_bdevs_discovered": 1, 00:20:10.922 "num_base_bdevs_operational": 4, 00:20:10.922 "base_bdevs_list": [ 00:20:10.922 { 00:20:10.922 "name": "pt1", 00:20:10.922 "uuid": "75256c86-c737-5e57-b113-4d86f1d9813a", 00:20:10.922 "is_configured": true, 00:20:10.922 "data_offset": 2048, 00:20:10.922 "data_size": 63488 00:20:10.922 }, 00:20:10.922 { 00:20:10.922 "name": null, 00:20:10.922 "uuid": "5cdc1065-06cb-5ca7-bfa2-dc923f5f628c", 00:20:10.922 "is_configured": false, 00:20:10.922 "data_offset": 2048, 00:20:10.922 "data_size": 63488 00:20:10.922 }, 00:20:10.922 { 00:20:10.922 "name": null, 00:20:10.922 "uuid": "14d2e258-b12c-52b7-b08f-d7dc0fb8655c", 00:20:10.922 "is_configured": false, 00:20:10.922 "data_offset": 2048, 00:20:10.922 "data_size": 63488 00:20:10.922 }, 00:20:10.922 { 00:20:10.922 "name": null, 00:20:10.922 "uuid": "857efe55-59e2-5489-bf7a-90e1a2304bc8", 00:20:10.922 "is_configured": false, 00:20:10.922 "data_offset": 2048, 00:20:10.922 "data_size": 63488 00:20:10.922 } 00:20:10.922 ] 00:20:10.922 }' 00:20:10.922 10:34:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:10.922 10:34:04 -- common/autotest_common.sh@10 -- # set +x 00:20:11.487 10:34:05 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:20:11.487 10:34:05 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:11.746 [2024-07-12 10:34:05.584781] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:11.746 [2024-07-12 10:34:05.584857] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:11.746 [2024-07-12 10:34:05.584898] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:20:11.746 [2024-07-12 10:34:05.584916] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:11.746 [2024-07-12 10:34:05.585363] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:11.746 [2024-07-12 10:34:05.585401] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:11.746 [2024-07-12 10:34:05.585500] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:20:11.746 [2024-07-12 10:34:05.585534] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:11.746 pt2 00:20:11.746 10:34:05 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:12.004 [2024-07-12 10:34:05.832784] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:20:12.004 10:34:05 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:20:12.004 10:34:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:12.004 10:34:05 -- bdev/bdev_raid.sh@118 -- # 
local expected_state=configuring 00:20:12.004 10:34:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:12.004 10:34:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:12.005 10:34:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:12.005 10:34:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:12.005 10:34:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:12.005 10:34:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:12.005 10:34:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:12.005 10:34:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:12.005 10:34:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:12.263 10:34:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:12.263 "name": "raid_bdev1", 00:20:12.263 "uuid": "c1fe357d-547f-40dc-97a6-5d361276d85f", 00:20:12.263 "strip_size_kb": 0, 00:20:12.263 "state": "configuring", 00:20:12.263 "raid_level": "raid1", 00:20:12.263 "superblock": true, 00:20:12.263 "num_base_bdevs": 4, 00:20:12.263 "num_base_bdevs_discovered": 1, 00:20:12.263 "num_base_bdevs_operational": 4, 00:20:12.263 "base_bdevs_list": [ 00:20:12.263 { 00:20:12.263 "name": "pt1", 00:20:12.263 "uuid": "75256c86-c737-5e57-b113-4d86f1d9813a", 00:20:12.263 "is_configured": true, 00:20:12.263 "data_offset": 2048, 00:20:12.263 "data_size": 63488 00:20:12.263 }, 00:20:12.263 { 00:20:12.263 "name": null, 00:20:12.263 "uuid": "5cdc1065-06cb-5ca7-bfa2-dc923f5f628c", 00:20:12.263 "is_configured": false, 00:20:12.263 "data_offset": 2048, 00:20:12.263 "data_size": 63488 00:20:12.263 }, 00:20:12.263 { 00:20:12.263 "name": null, 00:20:12.263 "uuid": "14d2e258-b12c-52b7-b08f-d7dc0fb8655c", 00:20:12.263 "is_configured": false, 00:20:12.263 "data_offset": 2048, 00:20:12.263 "data_size": 63488 00:20:12.263 }, 00:20:12.263 { 00:20:12.263 "name": null, 00:20:12.263 "uuid": "857efe55-59e2-5489-bf7a-90e1a2304bc8", 00:20:12.263 "is_configured": false, 00:20:12.263 "data_offset": 2048, 00:20:12.263 "data_size": 63488 00:20:12.263 } 00:20:12.263 ] 00:20:12.263 }' 00:20:12.263 10:34:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:12.263 10:34:06 -- common/autotest_common.sh@10 -- # set +x 00:20:13.198 10:34:06 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:20:13.198 10:34:06 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:20:13.198 10:34:06 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:13.198 [2024-07-12 10:34:06.975771] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:13.198 [2024-07-12 10:34:06.975834] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:13.198 [2024-07-12 10:34:06.975873] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:20:13.198 [2024-07-12 10:34:06.975891] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:13.198 [2024-07-12 10:34:06.976229] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:13.198 [2024-07-12 10:34:06.976280] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:13.198 [2024-07-12 10:34:06.976363] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:20:13.198 [2024-07-12 
10:34:06.976386] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:13.198 pt2 00:20:13.198 10:34:06 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:20:13.198 10:34:06 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:20:13.198 10:34:06 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:13.455 [2024-07-12 10:34:07.247856] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:13.455 [2024-07-12 10:34:07.247913] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:13.455 [2024-07-12 10:34:07.247942] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:20:13.455 [2024-07-12 10:34:07.247963] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:13.455 [2024-07-12 10:34:07.248285] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:13.455 [2024-07-12 10:34:07.248337] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:13.455 [2024-07-12 10:34:07.248409] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:20:13.455 [2024-07-12 10:34:07.248430] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:13.455 pt3 00:20:13.455 10:34:07 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:20:13.455 10:34:07 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:20:13.455 10:34:07 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:13.712 [2024-07-12 10:34:07.511920] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:13.712 [2024-07-12 10:34:07.511975] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:13.712 [2024-07-12 10:34:07.512000] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:20:13.712 [2024-07-12 10:34:07.512020] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:13.712 [2024-07-12 10:34:07.512342] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:13.712 [2024-07-12 10:34:07.512391] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:20:13.712 [2024-07-12 10:34:07.512468] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:20:13.712 [2024-07-12 10:34:07.512490] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:13.712 [2024-07-12 10:34:07.512618] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:20:13.712 [2024-07-12 10:34:07.512632] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:13.712 [2024-07-12 10:34:07.512725] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:13.712 [2024-07-12 10:34:07.513033] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:20:13.712 [2024-07-12 10:34:07.513052] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:20:13.712 [2024-07-12 10:34:07.513167] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:13.712 pt4 
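(Editorial aside.) The run above assembles raid_bdev1 from four passthru bdevs, each layered over a 32 MiB malloc bdev with 512-byte blocks. A minimal sketch of the same RPC flow, assuming a bdev application is already listening on /var/tmp/spdk-raid.sock and rpc.py is on PATH; the loop and UUID pattern are illustrative shorthand, not the test's exact code:

  # Create the base malloc bdevs and wrap each one in a passthru bdev.
  for i in 1 2 3 4; do
      rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc$i
      rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc$i -p pt$i \
          -u 00000000-0000-0000-0000-00000000000$i
  done

  # Build a raid1 bdev over the passthru bdevs; -s writes an on-disk superblock.
  rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 \
      -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s

  # Check that the array came online.
  rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "raid_bdev1") | .state'
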
00:20:13.712 10:34:07 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:20:13.712 10:34:07 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:20:13.712 10:34:07 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:20:13.712 10:34:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:13.712 10:34:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:13.712 10:34:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:13.712 10:34:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:13.712 10:34:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:13.712 10:34:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:13.712 10:34:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:13.712 10:34:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:13.712 10:34:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:13.712 10:34:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:13.712 10:34:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:13.969 10:34:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:13.969 "name": "raid_bdev1", 00:20:13.969 "uuid": "c1fe357d-547f-40dc-97a6-5d361276d85f", 00:20:13.969 "strip_size_kb": 0, 00:20:13.969 "state": "online", 00:20:13.969 "raid_level": "raid1", 00:20:13.969 "superblock": true, 00:20:13.969 "num_base_bdevs": 4, 00:20:13.969 "num_base_bdevs_discovered": 4, 00:20:13.969 "num_base_bdevs_operational": 4, 00:20:13.969 "base_bdevs_list": [ 00:20:13.969 { 00:20:13.969 "name": "pt1", 00:20:13.969 "uuid": "75256c86-c737-5e57-b113-4d86f1d9813a", 00:20:13.969 "is_configured": true, 00:20:13.969 "data_offset": 2048, 00:20:13.969 "data_size": 63488 00:20:13.969 }, 00:20:13.969 { 00:20:13.969 "name": "pt2", 00:20:13.969 "uuid": "5cdc1065-06cb-5ca7-bfa2-dc923f5f628c", 00:20:13.969 "is_configured": true, 00:20:13.969 "data_offset": 2048, 00:20:13.969 "data_size": 63488 00:20:13.969 }, 00:20:13.969 { 00:20:13.969 "name": "pt3", 00:20:13.969 "uuid": "14d2e258-b12c-52b7-b08f-d7dc0fb8655c", 00:20:13.969 "is_configured": true, 00:20:13.969 "data_offset": 2048, 00:20:13.969 "data_size": 63488 00:20:13.969 }, 00:20:13.969 { 00:20:13.969 "name": "pt4", 00:20:13.969 "uuid": "857efe55-59e2-5489-bf7a-90e1a2304bc8", 00:20:13.969 "is_configured": true, 00:20:13.969 "data_offset": 2048, 00:20:13.969 "data_size": 63488 00:20:13.969 } 00:20:13.969 ] 00:20:13.969 }' 00:20:13.969 10:34:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:13.969 10:34:07 -- common/autotest_common.sh@10 -- # set +x 00:20:14.575 10:34:08 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:14.575 10:34:08 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:20:14.833 [2024-07-12 10:34:08.581972] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:14.833 10:34:08 -- bdev/bdev_raid.sh@430 -- # '[' c1fe357d-547f-40dc-97a6-5d361276d85f '!=' c1fe357d-547f-40dc-97a6-5d361276d85f ']' 00:20:14.833 10:34:08 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:20:14.833 10:34:08 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:20:14.833 10:34:08 -- bdev/bdev_raid.sh@196 -- # return 0 00:20:14.833 10:34:08 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:20:15.092 [2024-07-12 10:34:08.761337] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:20:15.092 10:34:08 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:15.092 10:34:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:15.092 10:34:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:15.092 10:34:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:15.092 10:34:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:15.092 10:34:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:15.092 10:34:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:15.092 10:34:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:15.092 10:34:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:15.092 10:34:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:15.092 10:34:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:15.092 10:34:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:15.092 10:34:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:15.092 "name": "raid_bdev1", 00:20:15.092 "uuid": "c1fe357d-547f-40dc-97a6-5d361276d85f", 00:20:15.092 "strip_size_kb": 0, 00:20:15.092 "state": "online", 00:20:15.092 "raid_level": "raid1", 00:20:15.092 "superblock": true, 00:20:15.092 "num_base_bdevs": 4, 00:20:15.092 "num_base_bdevs_discovered": 3, 00:20:15.092 "num_base_bdevs_operational": 3, 00:20:15.092 "base_bdevs_list": [ 00:20:15.092 { 00:20:15.092 "name": null, 00:20:15.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:15.092 "is_configured": false, 00:20:15.092 "data_offset": 2048, 00:20:15.092 "data_size": 63488 00:20:15.092 }, 00:20:15.092 { 00:20:15.092 "name": "pt2", 00:20:15.092 "uuid": "5cdc1065-06cb-5ca7-bfa2-dc923f5f628c", 00:20:15.092 "is_configured": true, 00:20:15.092 "data_offset": 2048, 00:20:15.092 "data_size": 63488 00:20:15.092 }, 00:20:15.092 { 00:20:15.092 "name": "pt3", 00:20:15.092 "uuid": "14d2e258-b12c-52b7-b08f-d7dc0fb8655c", 00:20:15.092 "is_configured": true, 00:20:15.092 "data_offset": 2048, 00:20:15.092 "data_size": 63488 00:20:15.092 }, 00:20:15.092 { 00:20:15.092 "name": "pt4", 00:20:15.092 "uuid": "857efe55-59e2-5489-bf7a-90e1a2304bc8", 00:20:15.092 "is_configured": true, 00:20:15.092 "data_offset": 2048, 00:20:15.092 "data_size": 63488 00:20:15.092 } 00:20:15.092 ] 00:20:15.092 }' 00:20:15.092 10:34:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:15.092 10:34:08 -- common/autotest_common.sh@10 -- # set +x 00:20:16.028 10:34:09 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:16.028 [2024-07-12 10:34:09.865463] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:16.028 [2024-07-12 10:34:09.865487] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:16.028 [2024-07-12 10:34:09.865539] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:16.028 [2024-07-12 10:34:09.865607] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:16.028 [2024-07-12 10:34:09.865618] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:20:16.028 10:34:09 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:20:16.028 10:34:09 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:20:16.287 10:34:10 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:20:16.287 10:34:10 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:20:16.287 10:34:10 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:20:16.287 10:34:10 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:20:16.287 10:34:10 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:16.545 10:34:10 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:20:16.545 10:34:10 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:20:16.545 10:34:10 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:20:16.804 10:34:10 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:20:16.804 10:34:10 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:20:16.804 10:34:10 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:20:16.804 10:34:10 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:20:16.804 10:34:10 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:20:16.804 10:34:10 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:20:16.804 10:34:10 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:20:16.804 10:34:10 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:17.063 [2024-07-12 10:34:10.845612] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:17.063 [2024-07-12 10:34:10.845687] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:17.063 [2024-07-12 10:34:10.845722] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:20:17.063 [2024-07-12 10:34:10.845749] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:17.063 [2024-07-12 10:34:10.847851] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:17.063 [2024-07-12 10:34:10.847913] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:17.063 [2024-07-12 10:34:10.848016] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:20:17.063 [2024-07-12 10:34:10.848069] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:17.063 pt2 00:20:17.063 10:34:10 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:20:17.063 10:34:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:17.063 10:34:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:17.063 10:34:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:17.063 10:34:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:17.063 10:34:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:17.063 10:34:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:17.063 10:34:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:17.063 10:34:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:17.063 10:34:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:17.063 10:34:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:17.063 10:34:10 -- bdev/bdev_raid.sh@127 
-- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:17.321 10:34:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:17.321 "name": "raid_bdev1", 00:20:17.321 "uuid": "c1fe357d-547f-40dc-97a6-5d361276d85f", 00:20:17.321 "strip_size_kb": 0, 00:20:17.322 "state": "configuring", 00:20:17.322 "raid_level": "raid1", 00:20:17.322 "superblock": true, 00:20:17.322 "num_base_bdevs": 4, 00:20:17.322 "num_base_bdevs_discovered": 1, 00:20:17.322 "num_base_bdevs_operational": 3, 00:20:17.322 "base_bdevs_list": [ 00:20:17.322 { 00:20:17.322 "name": null, 00:20:17.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:17.322 "is_configured": false, 00:20:17.322 "data_offset": 2048, 00:20:17.322 "data_size": 63488 00:20:17.322 }, 00:20:17.322 { 00:20:17.322 "name": "pt2", 00:20:17.322 "uuid": "5cdc1065-06cb-5ca7-bfa2-dc923f5f628c", 00:20:17.322 "is_configured": true, 00:20:17.322 "data_offset": 2048, 00:20:17.322 "data_size": 63488 00:20:17.322 }, 00:20:17.322 { 00:20:17.322 "name": null, 00:20:17.322 "uuid": "14d2e258-b12c-52b7-b08f-d7dc0fb8655c", 00:20:17.322 "is_configured": false, 00:20:17.322 "data_offset": 2048, 00:20:17.322 "data_size": 63488 00:20:17.322 }, 00:20:17.322 { 00:20:17.322 "name": null, 00:20:17.322 "uuid": "857efe55-59e2-5489-bf7a-90e1a2304bc8", 00:20:17.322 "is_configured": false, 00:20:17.322 "data_offset": 2048, 00:20:17.322 "data_size": 63488 00:20:17.322 } 00:20:17.322 ] 00:20:17.322 }' 00:20:17.322 10:34:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:17.322 10:34:11 -- common/autotest_common.sh@10 -- # set +x 00:20:17.890 10:34:11 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:20:17.890 10:34:11 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:20:17.890 10:34:11 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:18.148 [2024-07-12 10:34:11.869789] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:18.148 [2024-07-12 10:34:11.869859] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:18.148 [2024-07-12 10:34:11.869897] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:20:18.148 [2024-07-12 10:34:11.869923] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:18.148 [2024-07-12 10:34:11.870360] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:18.148 [2024-07-12 10:34:11.870419] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:18.148 [2024-07-12 10:34:11.870512] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:20:18.148 [2024-07-12 10:34:11.870539] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:18.148 pt3 00:20:18.148 10:34:11 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:20:18.148 10:34:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:18.148 10:34:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:18.148 10:34:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:18.148 10:34:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:18.148 10:34:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:18.148 10:34:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:18.148 10:34:11 -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs 00:20:18.148 10:34:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:18.148 10:34:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:18.148 10:34:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:18.148 10:34:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:18.407 10:34:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:18.407 "name": "raid_bdev1", 00:20:18.407 "uuid": "c1fe357d-547f-40dc-97a6-5d361276d85f", 00:20:18.407 "strip_size_kb": 0, 00:20:18.407 "state": "configuring", 00:20:18.407 "raid_level": "raid1", 00:20:18.407 "superblock": true, 00:20:18.407 "num_base_bdevs": 4, 00:20:18.407 "num_base_bdevs_discovered": 2, 00:20:18.407 "num_base_bdevs_operational": 3, 00:20:18.407 "base_bdevs_list": [ 00:20:18.407 { 00:20:18.407 "name": null, 00:20:18.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:18.407 "is_configured": false, 00:20:18.407 "data_offset": 2048, 00:20:18.407 "data_size": 63488 00:20:18.407 }, 00:20:18.407 { 00:20:18.407 "name": "pt2", 00:20:18.407 "uuid": "5cdc1065-06cb-5ca7-bfa2-dc923f5f628c", 00:20:18.407 "is_configured": true, 00:20:18.407 "data_offset": 2048, 00:20:18.407 "data_size": 63488 00:20:18.407 }, 00:20:18.407 { 00:20:18.407 "name": "pt3", 00:20:18.407 "uuid": "14d2e258-b12c-52b7-b08f-d7dc0fb8655c", 00:20:18.407 "is_configured": true, 00:20:18.407 "data_offset": 2048, 00:20:18.407 "data_size": 63488 00:20:18.407 }, 00:20:18.407 { 00:20:18.407 "name": null, 00:20:18.407 "uuid": "857efe55-59e2-5489-bf7a-90e1a2304bc8", 00:20:18.407 "is_configured": false, 00:20:18.407 "data_offset": 2048, 00:20:18.407 "data_size": 63488 00:20:18.407 } 00:20:18.407 ] 00:20:18.407 }' 00:20:18.407 10:34:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:18.407 10:34:12 -- common/autotest_common.sh@10 -- # set +x 00:20:18.975 10:34:12 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:20:18.975 10:34:12 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:20:18.975 10:34:12 -- bdev/bdev_raid.sh@462 -- # i=3 00:20:18.975 10:34:12 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:19.238 [2024-07-12 10:34:12.978012] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:19.238 [2024-07-12 10:34:12.978074] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:19.238 [2024-07-12 10:34:12.978112] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:20:19.238 [2024-07-12 10:34:12.978131] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:19.238 [2024-07-12 10:34:12.978561] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:19.238 [2024-07-12 10:34:12.978617] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:20:19.238 [2024-07-12 10:34:12.978730] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:20:19.238 [2024-07-12 10:34:12.978756] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:19.238 [2024-07-12 10:34:12.978889] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000bd80 00:20:19.238 [2024-07-12 10:34:12.978901] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 
00:20:19.238 [2024-07-12 10:34:12.979055] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:20:19.238 [2024-07-12 10:34:12.979459] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000bd80 00:20:19.238 [2024-07-12 10:34:12.979480] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000bd80 00:20:19.238 [2024-07-12 10:34:12.979643] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:19.238 pt4 00:20:19.238 10:34:12 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:19.238 10:34:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:19.238 10:34:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:19.238 10:34:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:19.238 10:34:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:19.238 10:34:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:19.238 10:34:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:19.238 10:34:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:19.238 10:34:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:19.238 10:34:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:19.238 10:34:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:19.238 10:34:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:19.498 10:34:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:19.498 "name": "raid_bdev1", 00:20:19.498 "uuid": "c1fe357d-547f-40dc-97a6-5d361276d85f", 00:20:19.498 "strip_size_kb": 0, 00:20:19.498 "state": "online", 00:20:19.498 "raid_level": "raid1", 00:20:19.498 "superblock": true, 00:20:19.498 "num_base_bdevs": 4, 00:20:19.498 "num_base_bdevs_discovered": 3, 00:20:19.498 "num_base_bdevs_operational": 3, 00:20:19.498 "base_bdevs_list": [ 00:20:19.498 { 00:20:19.498 "name": null, 00:20:19.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:19.498 "is_configured": false, 00:20:19.498 "data_offset": 2048, 00:20:19.498 "data_size": 63488 00:20:19.498 }, 00:20:19.498 { 00:20:19.498 "name": "pt2", 00:20:19.498 "uuid": "5cdc1065-06cb-5ca7-bfa2-dc923f5f628c", 00:20:19.498 "is_configured": true, 00:20:19.498 "data_offset": 2048, 00:20:19.498 "data_size": 63488 00:20:19.498 }, 00:20:19.498 { 00:20:19.498 "name": "pt3", 00:20:19.498 "uuid": "14d2e258-b12c-52b7-b08f-d7dc0fb8655c", 00:20:19.498 "is_configured": true, 00:20:19.498 "data_offset": 2048, 00:20:19.498 "data_size": 63488 00:20:19.498 }, 00:20:19.498 { 00:20:19.498 "name": "pt4", 00:20:19.498 "uuid": "857efe55-59e2-5489-bf7a-90e1a2304bc8", 00:20:19.498 "is_configured": true, 00:20:19.498 "data_offset": 2048, 00:20:19.498 "data_size": 63488 00:20:19.498 } 00:20:19.498 ] 00:20:19.498 }' 00:20:19.498 10:34:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:19.498 10:34:13 -- common/autotest_common.sh@10 -- # set +x 00:20:20.063 10:34:13 -- bdev/bdev_raid.sh@468 -- # '[' 4 -gt 2 ']' 00:20:20.063 10:34:13 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:20.063 [2024-07-12 10:34:13.930132] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:20.063 [2024-07-12 10:34:13.930155] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to 
offline 00:20:20.063 [2024-07-12 10:34:13.930203] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:20.063 [2024-07-12 10:34:13.930261] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:20.063 [2024-07-12 10:34:13.930271] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state offline 00:20:20.063 10:34:13 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:20:20.063 10:34:13 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:20.321 10:34:14 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:20:20.321 10:34:14 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:20:20.321 10:34:14 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:20.579 [2024-07-12 10:34:14.406228] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:20.579 [2024-07-12 10:34:14.406286] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:20.579 [2024-07-12 10:34:14.406319] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:20:20.579 [2024-07-12 10:34:14.406339] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:20.579 [2024-07-12 10:34:14.408510] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:20.579 [2024-07-12 10:34:14.408568] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:20.579 [2024-07-12 10:34:14.408650] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:20:20.579 [2024-07-12 10:34:14.408696] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:20.579 pt1 00:20:20.579 10:34:14 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:20:20.579 10:34:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:20.579 10:34:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:20.579 10:34:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:20.579 10:34:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:20.579 10:34:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:20.579 10:34:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:20.579 10:34:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:20.579 10:34:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:20.579 10:34:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:20.579 10:34:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:20.579 10:34:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:20.837 10:34:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:20.837 "name": "raid_bdev1", 00:20:20.837 "uuid": "c1fe357d-547f-40dc-97a6-5d361276d85f", 00:20:20.837 "strip_size_kb": 0, 00:20:20.837 "state": "configuring", 00:20:20.837 "raid_level": "raid1", 00:20:20.837 "superblock": true, 00:20:20.837 "num_base_bdevs": 4, 00:20:20.837 "num_base_bdevs_discovered": 1, 00:20:20.837 "num_base_bdevs_operational": 4, 00:20:20.837 "base_bdevs_list": [ 00:20:20.837 { 00:20:20.837 "name": "pt1", 00:20:20.837 "uuid": 
"75256c86-c737-5e57-b113-4d86f1d9813a", 00:20:20.837 "is_configured": true, 00:20:20.837 "data_offset": 2048, 00:20:20.837 "data_size": 63488 00:20:20.837 }, 00:20:20.837 { 00:20:20.837 "name": null, 00:20:20.837 "uuid": "5cdc1065-06cb-5ca7-bfa2-dc923f5f628c", 00:20:20.837 "is_configured": false, 00:20:20.837 "data_offset": 2048, 00:20:20.837 "data_size": 63488 00:20:20.837 }, 00:20:20.837 { 00:20:20.837 "name": null, 00:20:20.837 "uuid": "14d2e258-b12c-52b7-b08f-d7dc0fb8655c", 00:20:20.837 "is_configured": false, 00:20:20.837 "data_offset": 2048, 00:20:20.837 "data_size": 63488 00:20:20.837 }, 00:20:20.837 { 00:20:20.837 "name": null, 00:20:20.837 "uuid": "857efe55-59e2-5489-bf7a-90e1a2304bc8", 00:20:20.837 "is_configured": false, 00:20:20.837 "data_offset": 2048, 00:20:20.838 "data_size": 63488 00:20:20.838 } 00:20:20.838 ] 00:20:20.838 }' 00:20:20.838 10:34:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:20.838 10:34:14 -- common/autotest_common.sh@10 -- # set +x 00:20:21.404 10:34:15 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:20:21.404 10:34:15 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:20:21.404 10:34:15 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:21.662 10:34:15 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:20:21.662 10:34:15 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:20:21.662 10:34:15 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:20:21.921 10:34:15 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:20:21.921 10:34:15 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:20:21.921 10:34:15 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:20:22.179 10:34:16 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:20:22.179 10:34:16 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:20:22.179 10:34:16 -- bdev/bdev_raid.sh@489 -- # i=3 00:20:22.179 10:34:16 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:22.437 [2024-07-12 10:34:16.250665] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:22.437 [2024-07-12 10:34:16.250795] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:22.437 [2024-07-12 10:34:16.250850] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cf80 00:20:22.437 [2024-07-12 10:34:16.250912] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:22.437 [2024-07-12 10:34:16.251610] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:22.437 [2024-07-12 10:34:16.251669] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:20:22.437 [2024-07-12 10:34:16.251775] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:20:22.437 [2024-07-12 10:34:16.251791] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt4 (4) greater than existing raid bdev raid_bdev1 (2) 00:20:22.437 [2024-07-12 10:34:16.251798] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:22.437 [2024-07-12 10:34:16.251816] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000cc80 name raid_bdev1, state configuring 
00:20:22.437 [2024-07-12 10:34:16.251878] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:22.437 pt4 00:20:22.437 10:34:16 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:20:22.437 10:34:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:22.437 10:34:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:22.437 10:34:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:22.437 10:34:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:22.437 10:34:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:22.437 10:34:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:22.437 10:34:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:22.437 10:34:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:22.437 10:34:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:22.437 10:34:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:22.437 10:34:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:22.696 10:34:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:22.696 "name": "raid_bdev1", 00:20:22.696 "uuid": "c1fe357d-547f-40dc-97a6-5d361276d85f", 00:20:22.696 "strip_size_kb": 0, 00:20:22.696 "state": "configuring", 00:20:22.696 "raid_level": "raid1", 00:20:22.696 "superblock": true, 00:20:22.696 "num_base_bdevs": 4, 00:20:22.696 "num_base_bdevs_discovered": 1, 00:20:22.696 "num_base_bdevs_operational": 3, 00:20:22.696 "base_bdevs_list": [ 00:20:22.696 { 00:20:22.696 "name": null, 00:20:22.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:22.696 "is_configured": false, 00:20:22.696 "data_offset": 2048, 00:20:22.696 "data_size": 63488 00:20:22.696 }, 00:20:22.696 { 00:20:22.696 "name": null, 00:20:22.696 "uuid": "5cdc1065-06cb-5ca7-bfa2-dc923f5f628c", 00:20:22.696 "is_configured": false, 00:20:22.696 "data_offset": 2048, 00:20:22.696 "data_size": 63488 00:20:22.696 }, 00:20:22.696 { 00:20:22.696 "name": null, 00:20:22.696 "uuid": "14d2e258-b12c-52b7-b08f-d7dc0fb8655c", 00:20:22.696 "is_configured": false, 00:20:22.696 "data_offset": 2048, 00:20:22.696 "data_size": 63488 00:20:22.696 }, 00:20:22.696 { 00:20:22.696 "name": "pt4", 00:20:22.696 "uuid": "857efe55-59e2-5489-bf7a-90e1a2304bc8", 00:20:22.696 "is_configured": true, 00:20:22.696 "data_offset": 2048, 00:20:22.696 "data_size": 63488 00:20:22.696 } 00:20:22.696 ] 00:20:22.696 }' 00:20:22.696 10:34:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:22.696 10:34:16 -- common/autotest_common.sh@10 -- # set +x 00:20:23.262 10:34:17 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:20:23.262 10:34:17 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:20:23.262 10:34:17 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:23.521 [2024-07-12 10:34:17.390733] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:23.521 [2024-07-12 10:34:17.390852] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:23.521 [2024-07-12 10:34:17.390900] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000d580 00:20:23.521 [2024-07-12 10:34:17.390928] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:23.521 [2024-07-12 
10:34:17.391446] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:23.521 [2024-07-12 10:34:17.391553] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:23.521 [2024-07-12 10:34:17.391665] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:20:23.521 [2024-07-12 10:34:17.391692] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:23.521 pt2 00:20:23.521 10:34:17 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:20:23.521 10:34:17 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:20:23.521 10:34:17 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:23.779 [2024-07-12 10:34:17.650780] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:23.779 [2024-07-12 10:34:17.650842] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:23.779 [2024-07-12 10:34:17.650870] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000d880 00:20:23.779 [2024-07-12 10:34:17.650905] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:23.779 [2024-07-12 10:34:17.651317] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:23.779 [2024-07-12 10:34:17.651389] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:23.779 [2024-07-12 10:34:17.651510] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:20:23.779 [2024-07-12 10:34:17.651538] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:23.779 [2024-07-12 10:34:17.651670] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000d280 00:20:23.779 [2024-07-12 10:34:17.651694] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:23.779 [2024-07-12 10:34:17.651805] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:20:23.779 [2024-07-12 10:34:17.652117] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000d280 00:20:23.779 [2024-07-12 10:34:17.652142] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000d280 00:20:23.779 [2024-07-12 10:34:17.652276] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:23.779 pt3 00:20:23.779 10:34:17 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:20:23.779 10:34:17 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:20:23.779 10:34:17 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:23.779 10:34:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:23.779 10:34:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:23.779 10:34:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:23.779 10:34:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:23.779 10:34:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:23.779 10:34:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:23.779 10:34:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:23.779 10:34:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:23.779 10:34:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:23.779 10:34:17 
-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:23.779 10:34:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:24.037 10:34:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:24.037 "name": "raid_bdev1", 00:20:24.037 "uuid": "c1fe357d-547f-40dc-97a6-5d361276d85f", 00:20:24.037 "strip_size_kb": 0, 00:20:24.037 "state": "online", 00:20:24.037 "raid_level": "raid1", 00:20:24.037 "superblock": true, 00:20:24.037 "num_base_bdevs": 4, 00:20:24.037 "num_base_bdevs_discovered": 3, 00:20:24.038 "num_base_bdevs_operational": 3, 00:20:24.038 "base_bdevs_list": [ 00:20:24.038 { 00:20:24.038 "name": null, 00:20:24.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:24.038 "is_configured": false, 00:20:24.038 "data_offset": 2048, 00:20:24.038 "data_size": 63488 00:20:24.038 }, 00:20:24.038 { 00:20:24.038 "name": "pt2", 00:20:24.038 "uuid": "5cdc1065-06cb-5ca7-bfa2-dc923f5f628c", 00:20:24.038 "is_configured": true, 00:20:24.038 "data_offset": 2048, 00:20:24.038 "data_size": 63488 00:20:24.038 }, 00:20:24.038 { 00:20:24.038 "name": "pt3", 00:20:24.038 "uuid": "14d2e258-b12c-52b7-b08f-d7dc0fb8655c", 00:20:24.038 "is_configured": true, 00:20:24.038 "data_offset": 2048, 00:20:24.038 "data_size": 63488 00:20:24.038 }, 00:20:24.038 { 00:20:24.038 "name": "pt4", 00:20:24.038 "uuid": "857efe55-59e2-5489-bf7a-90e1a2304bc8", 00:20:24.038 "is_configured": true, 00:20:24.038 "data_offset": 2048, 00:20:24.038 "data_size": 63488 00:20:24.038 } 00:20:24.038 ] 00:20:24.038 }' 00:20:24.038 10:34:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:24.038 10:34:17 -- common/autotest_common.sh@10 -- # set +x 00:20:24.973 10:34:18 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:24.973 10:34:18 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:20:24.973 [2024-07-12 10:34:18.767157] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:24.973 10:34:18 -- bdev/bdev_raid.sh@506 -- # '[' c1fe357d-547f-40dc-97a6-5d361276d85f '!=' c1fe357d-547f-40dc-97a6-5d361276d85f ']' 00:20:24.973 10:34:18 -- bdev/bdev_raid.sh@511 -- # killprocess 125028 00:20:24.973 10:34:18 -- common/autotest_common.sh@926 -- # '[' -z 125028 ']' 00:20:24.973 10:34:18 -- common/autotest_common.sh@930 -- # kill -0 125028 00:20:24.973 10:34:18 -- common/autotest_common.sh@931 -- # uname 00:20:24.973 10:34:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:24.973 10:34:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 125028 00:20:24.973 killing process with pid 125028 00:20:24.973 10:34:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:24.973 10:34:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:24.973 10:34:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 125028' 00:20:24.973 10:34:18 -- common/autotest_common.sh@945 -- # kill 125028 00:20:24.973 10:34:18 -- common/autotest_common.sh@950 -- # wait 125028 00:20:24.973 [2024-07-12 10:34:18.802040] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:24.973 [2024-07-12 10:34:18.802141] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:24.973 [2024-07-12 10:34:18.802257] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:24.973 [2024-07-12 
10:34:18.802282] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000d280 name raid_bdev1, state offline 00:20:25.232 [2024-07-12 10:34:19.137276] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:26.608 ************************************ 00:20:26.608 END TEST raid_superblock_test 00:20:26.608 ************************************ 00:20:26.608 10:34:20 -- bdev/bdev_raid.sh@513 -- # return 0 00:20:26.608 00:20:26.608 real 0m21.589s 00:20:26.608 user 0m39.841s 00:20:26.608 sys 0m2.378s 00:20:26.608 10:34:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:26.608 10:34:20 -- common/autotest_common.sh@10 -- # set +x 00:20:26.608 10:34:20 -- bdev/bdev_raid.sh@733 -- # '[' true = true ']' 00:20:26.608 10:34:20 -- bdev/bdev_raid.sh@734 -- # for n in 2 4 00:20:26.609 10:34:20 -- bdev/bdev_raid.sh@735 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false 00:20:26.609 10:34:20 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:20:26.609 10:34:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:26.609 10:34:20 -- common/autotest_common.sh@10 -- # set +x 00:20:26.609 ************************************ 00:20:26.609 START TEST raid_rebuild_test 00:20:26.609 ************************************ 00:20:26.609 10:34:20 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 2 false false 00:20:26.609 10:34:20 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:20:26.609 10:34:20 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:20:26.609 10:34:20 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:20:26.609 10:34:20 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:20:26.609 10:34:20 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:20:26.609 10:34:20 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:20:26.609 10:34:20 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:26.609 10:34:20 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:20:26.609 10:34:20 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:26.609 10:34:20 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:26.609 10:34:20 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:20:26.609 10:34:20 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:26.609 10:34:20 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:26.609 10:34:20 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:20:26.609 10:34:20 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:20:26.609 10:34:20 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:20:26.609 10:34:20 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:20:26.609 10:34:20 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:20:26.609 10:34:20 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:20:26.609 10:34:20 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:20:26.609 10:34:20 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:20:26.609 10:34:20 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:20:26.609 10:34:20 -- bdev/bdev_raid.sh@544 -- # raid_pid=125734 00:20:26.609 10:34:20 -- bdev/bdev_raid.sh@545 -- # waitforlisten 125734 /var/tmp/spdk-raid.sock 00:20:26.609 10:34:20 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:26.609 10:34:20 -- common/autotest_common.sh@819 -- # '[' -z 125734 ']' 00:20:26.609 10:34:20 -- common/autotest_common.sh@823 -- # local 
rpc_addr=/var/tmp/spdk-raid.sock 00:20:26.609 10:34:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:26.609 10:34:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:26.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:26.609 10:34:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:26.609 10:34:20 -- common/autotest_common.sh@10 -- # set +x 00:20:26.609 [2024-07-12 10:34:20.265881] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:20:26.609 [2024-07-12 10:34:20.266595] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125734 ] 00:20:26.609 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:26.609 Zero copy mechanism will not be used. 00:20:26.609 [2024-07-12 10:34:20.422240] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:26.867 [2024-07-12 10:34:20.639915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:27.125 [2024-07-12 10:34:20.803965] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:27.383 10:34:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:27.383 10:34:21 -- common/autotest_common.sh@852 -- # return 0 00:20:27.383 10:34:21 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:27.383 10:34:21 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:20:27.383 10:34:21 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:27.642 BaseBdev1 00:20:27.642 10:34:21 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:27.642 10:34:21 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:20:27.642 10:34:21 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:27.642 BaseBdev2 00:20:27.901 10:34:21 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:20:27.901 spare_malloc 00:20:27.901 10:34:21 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:28.159 spare_delay 00:20:28.159 10:34:21 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:28.417 [2024-07-12 10:34:22.157382] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:28.417 [2024-07-12 10:34:22.157474] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:28.417 [2024-07-12 10:34:22.157508] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:28.417 [2024-07-12 10:34:22.157552] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:28.417 [2024-07-12 10:34:22.159735] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:28.417 [2024-07-12 10:34:22.159798] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:28.417 spare 
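
The trace above has just built the fixture for the rebuild test: two 32 MiB malloc bdevs as raid members, plus a malloc -> delay -> passthru stack named "spare"; the next RPC assembles raid_bdev1 over the base pair. A minimal standalone sketch of that same fixture follows, assuming only that an SPDK application (here, the harness-started bdevperf) is already serving RPCs on /var/tmp/spdk-raid.sock; every RPC below appears verbatim in this log:

#!/usr/bin/env bash
# Sketch, not part of the test suite: rebuilds the bdev stack this log shows,
# assuming an SPDK app is already listening on /var/tmp/spdk-raid.sock.
rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

# Two 32 MiB, 512-byte-block malloc bdevs to act as the raid1 members.
rpc bdev_malloc_create 32 512 -b BaseBdev1
rpc bdev_malloc_create 32 512 -b BaseBdev2

# Spare: a malloc wrapped in a delay bdev (latency arguments copied verbatim
# from the trace), then a passthru bdev named "spare".
rpc bdev_malloc_create 32 512 -b spare_malloc
rpc bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
rpc bdev_passthru_create -b spare_delay -p spare

# raid1 across the two members, then read its state back the same way the
# test's verify_raid_bdev_state helper does: dump all raid bdevs, select by name.
rpc bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1
rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'
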
00:20:28.417 10:34:22 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:20:28.673 [2024-07-12 10:34:22.341468] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:28.673 [2024-07-12 10:34:22.343423] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:28.673 [2024-07-12 10:34:22.343507] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008480 00:20:28.673 [2024-07-12 10:34:22.343519] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:20:28.673 [2024-07-12 10:34:22.343683] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:20:28.673 [2024-07-12 10:34:22.344011] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008480 00:20:28.673 [2024-07-12 10:34:22.344036] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008480 00:20:28.673 [2024-07-12 10:34:22.344179] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:28.673 10:34:22 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:28.673 10:34:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:28.673 10:34:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:28.673 10:34:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:28.673 10:34:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:28.673 10:34:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:28.673 10:34:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:28.673 10:34:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:28.673 10:34:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:28.673 10:34:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:28.673 10:34:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:28.673 10:34:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:28.673 10:34:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:28.673 "name": "raid_bdev1", 00:20:28.673 "uuid": "731454cd-b84a-41e6-890a-b303f6256767", 00:20:28.674 "strip_size_kb": 0, 00:20:28.674 "state": "online", 00:20:28.674 "raid_level": "raid1", 00:20:28.674 "superblock": false, 00:20:28.674 "num_base_bdevs": 2, 00:20:28.674 "num_base_bdevs_discovered": 2, 00:20:28.674 "num_base_bdevs_operational": 2, 00:20:28.674 "base_bdevs_list": [ 00:20:28.674 { 00:20:28.674 "name": "BaseBdev1", 00:20:28.674 "uuid": "0c6a5b36-c9f5-4e71-9149-961ae4f3c1f5", 00:20:28.674 "is_configured": true, 00:20:28.674 "data_offset": 0, 00:20:28.674 "data_size": 65536 00:20:28.674 }, 00:20:28.674 { 00:20:28.674 "name": "BaseBdev2", 00:20:28.674 "uuid": "0530cca7-179e-4ded-ac63-128e71036fc6", 00:20:28.674 "is_configured": true, 00:20:28.674 "data_offset": 0, 00:20:28.674 "data_size": 65536 00:20:28.674 } 00:20:28.674 ] 00:20:28.674 }' 00:20:28.674 10:34:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:28.674 10:34:22 -- common/autotest_common.sh@10 -- # set +x 00:20:29.607 10:34:23 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:29.607 10:34:23 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 
00:20:29.607 [2024-07-12 10:34:23.349743] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:29.607 10:34:23 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:20:29.607 10:34:23 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:29.607 10:34:23 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:29.865 10:34:23 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:20:29.866 10:34:23 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:20:29.866 10:34:23 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:20:29.866 10:34:23 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:20:29.866 10:34:23 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:29.866 10:34:23 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:20:29.866 10:34:23 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:29.866 10:34:23 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:20:29.866 10:34:23 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:29.866 10:34:23 -- bdev/nbd_common.sh@12 -- # local i 00:20:29.866 10:34:23 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:29.866 10:34:23 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:29.866 10:34:23 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:30.123 [2024-07-12 10:34:23.813705] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:20:30.123 /dev/nbd0 00:20:30.123 10:34:23 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:30.123 10:34:23 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:30.123 10:34:23 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:20:30.123 10:34:23 -- common/autotest_common.sh@857 -- # local i 00:20:30.123 10:34:23 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:30.123 10:34:23 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:30.123 10:34:23 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:20:30.123 10:34:23 -- common/autotest_common.sh@861 -- # break 00:20:30.123 10:34:23 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:30.123 10:34:23 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:30.123 10:34:23 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:30.123 1+0 records in 00:20:30.123 1+0 records out 00:20:30.124 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000424395 s, 9.7 MB/s 00:20:30.124 10:34:23 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:30.124 10:34:23 -- common/autotest_common.sh@874 -- # size=4096 00:20:30.124 10:34:23 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:30.124 10:34:23 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:30.124 10:34:23 -- common/autotest_common.sh@877 -- # return 0 00:20:30.124 10:34:23 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:30.124 10:34:23 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:30.124 10:34:23 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:20:30.124 10:34:23 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:20:30.124 10:34:23 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:20:34.304 65536+0 records in 00:20:34.304 65536+0 records out 
00:20:34.304 33554432 bytes (34 MB, 32 MiB) copied, 4.34954 s, 7.7 MB/s 00:20:34.304 10:34:28 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:20:34.304 10:34:28 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:34.304 10:34:28 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:20:34.304 10:34:28 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:34.304 10:34:28 -- bdev/nbd_common.sh@51 -- # local i 00:20:34.304 10:34:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:34.304 10:34:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:34.562 10:34:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:34.562 10:34:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:34.562 10:34:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:34.562 10:34:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:34.562 10:34:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:34.562 10:34:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:34.562 10:34:28 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:20:34.562 [2024-07-12 10:34:28.476057] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:34.819 10:34:28 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:20:34.819 10:34:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:34.819 10:34:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:34.819 10:34:28 -- bdev/nbd_common.sh@41 -- # break 00:20:34.819 10:34:28 -- bdev/nbd_common.sh@45 -- # return 0 00:20:34.819 10:34:28 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:20:35.077 [2024-07-12 10:34:28.751677] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:35.077 10:34:28 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:35.077 10:34:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:35.077 10:34:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:35.077 10:34:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:35.077 10:34:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:35.077 10:34:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:35.077 10:34:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:35.077 10:34:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:35.077 10:34:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:35.077 10:34:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:35.077 10:34:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:35.077 10:34:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:35.077 10:34:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:35.077 "name": "raid_bdev1", 00:20:35.077 "uuid": "731454cd-b84a-41e6-890a-b303f6256767", 00:20:35.077 "strip_size_kb": 0, 00:20:35.077 "state": "online", 00:20:35.077 "raid_level": "raid1", 00:20:35.077 "superblock": false, 00:20:35.077 "num_base_bdevs": 2, 00:20:35.077 "num_base_bdevs_discovered": 1, 00:20:35.077 "num_base_bdevs_operational": 1, 00:20:35.077 "base_bdevs_list": [ 00:20:35.077 { 00:20:35.077 "name": null, 00:20:35.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:35.077 "is_configured": false, 00:20:35.077 "data_offset": 0, 
00:20:35.077 "data_size": 65536 00:20:35.077 }, 00:20:35.077 { 00:20:35.077 "name": "BaseBdev2", 00:20:35.077 "uuid": "0530cca7-179e-4ded-ac63-128e71036fc6", 00:20:35.077 "is_configured": true, 00:20:35.077 "data_offset": 0, 00:20:35.077 "data_size": 65536 00:20:35.077 } 00:20:35.077 ] 00:20:35.077 }' 00:20:35.077 10:34:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:35.077 10:34:28 -- common/autotest_common.sh@10 -- # set +x 00:20:36.011 10:34:29 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:36.269 [2024-07-12 10:34:29.944356] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:36.269 [2024-07-12 10:34:29.944411] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:36.269 [2024-07-12 10:34:29.956236] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d0b500 00:20:36.269 [2024-07-12 10:34:29.957891] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:36.269 10:34:29 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:20:37.205 10:34:30 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:37.205 10:34:30 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:37.205 10:34:30 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:37.205 10:34:30 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:37.205 10:34:30 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:37.205 10:34:30 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:37.205 10:34:30 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:37.462 10:34:31 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:37.462 "name": "raid_bdev1", 00:20:37.462 "uuid": "731454cd-b84a-41e6-890a-b303f6256767", 00:20:37.462 "strip_size_kb": 0, 00:20:37.462 "state": "online", 00:20:37.462 "raid_level": "raid1", 00:20:37.462 "superblock": false, 00:20:37.462 "num_base_bdevs": 2, 00:20:37.462 "num_base_bdevs_discovered": 2, 00:20:37.462 "num_base_bdevs_operational": 2, 00:20:37.462 "process": { 00:20:37.462 "type": "rebuild", 00:20:37.462 "target": "spare", 00:20:37.462 "progress": { 00:20:37.462 "blocks": 22528, 00:20:37.462 "percent": 34 00:20:37.462 } 00:20:37.462 }, 00:20:37.462 "base_bdevs_list": [ 00:20:37.462 { 00:20:37.462 "name": "spare", 00:20:37.462 "uuid": "d14e19a3-a322-58c7-bda6-57e2116aeff9", 00:20:37.462 "is_configured": true, 00:20:37.462 "data_offset": 0, 00:20:37.462 "data_size": 65536 00:20:37.462 }, 00:20:37.462 { 00:20:37.462 "name": "BaseBdev2", 00:20:37.462 "uuid": "0530cca7-179e-4ded-ac63-128e71036fc6", 00:20:37.462 "is_configured": true, 00:20:37.462 "data_offset": 0, 00:20:37.462 "data_size": 65536 00:20:37.462 } 00:20:37.462 ] 00:20:37.462 }' 00:20:37.462 10:34:31 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:37.462 10:34:31 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:37.462 10:34:31 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:37.462 10:34:31 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:37.463 10:34:31 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:20:37.720 [2024-07-12 10:34:31.479889] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:20:37.720 [2024-07-12 10:34:31.567375] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:37.720 [2024-07-12 10:34:31.567486] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:37.720 10:34:31 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:37.720 10:34:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:37.720 10:34:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:37.720 10:34:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:37.720 10:34:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:37.720 10:34:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:37.720 10:34:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:37.720 10:34:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:37.720 10:34:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:37.720 10:34:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:37.720 10:34:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:37.720 10:34:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:37.977 10:34:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:37.977 "name": "raid_bdev1", 00:20:37.977 "uuid": "731454cd-b84a-41e6-890a-b303f6256767", 00:20:37.977 "strip_size_kb": 0, 00:20:37.977 "state": "online", 00:20:37.977 "raid_level": "raid1", 00:20:37.977 "superblock": false, 00:20:37.977 "num_base_bdevs": 2, 00:20:37.977 "num_base_bdevs_discovered": 1, 00:20:37.977 "num_base_bdevs_operational": 1, 00:20:37.977 "base_bdevs_list": [ 00:20:37.977 { 00:20:37.977 "name": null, 00:20:37.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:37.977 "is_configured": false, 00:20:37.977 "data_offset": 0, 00:20:37.977 "data_size": 65536 00:20:37.977 }, 00:20:37.977 { 00:20:37.977 "name": "BaseBdev2", 00:20:37.977 "uuid": "0530cca7-179e-4ded-ac63-128e71036fc6", 00:20:37.977 "is_configured": true, 00:20:37.977 "data_offset": 0, 00:20:37.977 "data_size": 65536 00:20:37.977 } 00:20:37.977 ] 00:20:37.977 }' 00:20:37.977 10:34:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:37.977 10:34:31 -- common/autotest_common.sh@10 -- # set +x 00:20:38.544 10:34:32 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:38.544 10:34:32 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:38.544 10:34:32 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:38.544 10:34:32 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:38.544 10:34:32 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:38.544 10:34:32 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:38.544 10:34:32 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:38.803 10:34:32 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:38.803 "name": "raid_bdev1", 00:20:38.803 "uuid": "731454cd-b84a-41e6-890a-b303f6256767", 00:20:38.803 "strip_size_kb": 0, 00:20:38.803 "state": "online", 00:20:38.803 "raid_level": "raid1", 00:20:38.803 "superblock": false, 00:20:38.803 "num_base_bdevs": 2, 00:20:38.803 "num_base_bdevs_discovered": 1, 00:20:38.803 "num_base_bdevs_operational": 1, 00:20:38.803 "base_bdevs_list": [ 00:20:38.803 { 00:20:38.803 "name": null, 00:20:38.803 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:20:38.803 "is_configured": false, 00:20:38.803 "data_offset": 0, 00:20:38.803 "data_size": 65536 00:20:38.803 }, 00:20:38.803 { 00:20:38.803 "name": "BaseBdev2", 00:20:38.803 "uuid": "0530cca7-179e-4ded-ac63-128e71036fc6", 00:20:38.803 "is_configured": true, 00:20:38.803 "data_offset": 0, 00:20:38.803 "data_size": 65536 00:20:38.803 } 00:20:38.803 ] 00:20:38.803 }' 00:20:38.803 10:34:32 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:39.062 10:34:32 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:39.062 10:34:32 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:39.062 10:34:32 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:39.062 10:34:32 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:39.319 [2024-07-12 10:34:33.033250] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:39.319 [2024-07-12 10:34:33.033290] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:39.319 [2024-07-12 10:34:33.043784] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d0b6a0 00:20:39.319 [2024-07-12 10:34:33.045650] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:39.319 10:34:33 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:20:40.255 10:34:34 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:40.255 10:34:34 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:40.255 10:34:34 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:40.255 10:34:34 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:40.255 10:34:34 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:40.255 10:34:34 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:40.255 10:34:34 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:40.514 10:34:34 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:40.514 "name": "raid_bdev1", 00:20:40.514 "uuid": "731454cd-b84a-41e6-890a-b303f6256767", 00:20:40.514 "strip_size_kb": 0, 00:20:40.514 "state": "online", 00:20:40.514 "raid_level": "raid1", 00:20:40.514 "superblock": false, 00:20:40.514 "num_base_bdevs": 2, 00:20:40.514 "num_base_bdevs_discovered": 2, 00:20:40.514 "num_base_bdevs_operational": 2, 00:20:40.514 "process": { 00:20:40.514 "type": "rebuild", 00:20:40.514 "target": "spare", 00:20:40.514 "progress": { 00:20:40.514 "blocks": 24576, 00:20:40.514 "percent": 37 00:20:40.514 } 00:20:40.514 }, 00:20:40.514 "base_bdevs_list": [ 00:20:40.514 { 00:20:40.514 "name": "spare", 00:20:40.514 "uuid": "d14e19a3-a322-58c7-bda6-57e2116aeff9", 00:20:40.514 "is_configured": true, 00:20:40.514 "data_offset": 0, 00:20:40.514 "data_size": 65536 00:20:40.514 }, 00:20:40.514 { 00:20:40.514 "name": "BaseBdev2", 00:20:40.514 "uuid": "0530cca7-179e-4ded-ac63-128e71036fc6", 00:20:40.514 "is_configured": true, 00:20:40.514 "data_offset": 0, 00:20:40.514 "data_size": 65536 00:20:40.514 } 00:20:40.514 ] 00:20:40.514 }' 00:20:40.514 10:34:34 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:40.514 10:34:34 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:40.514 10:34:34 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:40.514 10:34:34 -- 
bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:40.514 10:34:34 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:20:40.514 10:34:34 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:20:40.514 10:34:34 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:20:40.514 10:34:34 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:20:40.514 10:34:34 -- bdev/bdev_raid.sh@657 -- # local timeout=387 00:20:40.514 10:34:34 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:40.514 10:34:34 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:40.514 10:34:34 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:40.514 10:34:34 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:40.514 10:34:34 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:40.514 10:34:34 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:40.514 10:34:34 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:40.514 10:34:34 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:40.773 10:34:34 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:40.773 "name": "raid_bdev1", 00:20:40.773 "uuid": "731454cd-b84a-41e6-890a-b303f6256767", 00:20:40.773 "strip_size_kb": 0, 00:20:40.773 "state": "online", 00:20:40.773 "raid_level": "raid1", 00:20:40.773 "superblock": false, 00:20:40.773 "num_base_bdevs": 2, 00:20:40.773 "num_base_bdevs_discovered": 2, 00:20:40.773 "num_base_bdevs_operational": 2, 00:20:40.773 "process": { 00:20:40.773 "type": "rebuild", 00:20:40.773 "target": "spare", 00:20:40.773 "progress": { 00:20:40.773 "blocks": 30720, 00:20:40.773 "percent": 46 00:20:40.773 } 00:20:40.773 }, 00:20:40.773 "base_bdevs_list": [ 00:20:40.773 { 00:20:40.773 "name": "spare", 00:20:40.773 "uuid": "d14e19a3-a322-58c7-bda6-57e2116aeff9", 00:20:40.773 "is_configured": true, 00:20:40.773 "data_offset": 0, 00:20:40.773 "data_size": 65536 00:20:40.773 }, 00:20:40.773 { 00:20:40.773 "name": "BaseBdev2", 00:20:40.773 "uuid": "0530cca7-179e-4ded-ac63-128e71036fc6", 00:20:40.773 "is_configured": true, 00:20:40.773 "data_offset": 0, 00:20:40.773 "data_size": 65536 00:20:40.773 } 00:20:40.773 ] 00:20:40.773 }' 00:20:40.773 10:34:34 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:41.031 10:34:34 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:41.031 10:34:34 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:41.031 10:34:34 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:41.031 10:34:34 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:41.967 10:34:35 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:41.967 10:34:35 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:41.967 10:34:35 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:41.967 10:34:35 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:41.967 10:34:35 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:41.967 10:34:35 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:41.967 10:34:35 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:41.967 10:34:35 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:42.225 10:34:36 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:42.225 "name": "raid_bdev1", 00:20:42.225 "uuid": 
"731454cd-b84a-41e6-890a-b303f6256767", 00:20:42.225 "strip_size_kb": 0, 00:20:42.225 "state": "online", 00:20:42.225 "raid_level": "raid1", 00:20:42.225 "superblock": false, 00:20:42.225 "num_base_bdevs": 2, 00:20:42.225 "num_base_bdevs_discovered": 2, 00:20:42.225 "num_base_bdevs_operational": 2, 00:20:42.225 "process": { 00:20:42.225 "type": "rebuild", 00:20:42.225 "target": "spare", 00:20:42.225 "progress": { 00:20:42.225 "blocks": 59392, 00:20:42.225 "percent": 90 00:20:42.225 } 00:20:42.225 }, 00:20:42.225 "base_bdevs_list": [ 00:20:42.225 { 00:20:42.225 "name": "spare", 00:20:42.225 "uuid": "d14e19a3-a322-58c7-bda6-57e2116aeff9", 00:20:42.225 "is_configured": true, 00:20:42.225 "data_offset": 0, 00:20:42.225 "data_size": 65536 00:20:42.225 }, 00:20:42.225 { 00:20:42.225 "name": "BaseBdev2", 00:20:42.225 "uuid": "0530cca7-179e-4ded-ac63-128e71036fc6", 00:20:42.225 "is_configured": true, 00:20:42.225 "data_offset": 0, 00:20:42.225 "data_size": 65536 00:20:42.225 } 00:20:42.225 ] 00:20:42.225 }' 00:20:42.225 10:34:36 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:42.225 10:34:36 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:42.225 10:34:36 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:42.225 10:34:36 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:42.226 10:34:36 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:42.484 [2024-07-12 10:34:36.264936] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:42.484 [2024-07-12 10:34:36.265016] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:42.484 [2024-07-12 10:34:36.265089] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:43.420 10:34:37 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:43.420 10:34:37 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:43.420 10:34:37 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:43.420 10:34:37 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:43.420 10:34:37 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:43.420 10:34:37 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:43.420 10:34:37 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:43.420 10:34:37 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:43.679 10:34:37 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:43.679 "name": "raid_bdev1", 00:20:43.679 "uuid": "731454cd-b84a-41e6-890a-b303f6256767", 00:20:43.679 "strip_size_kb": 0, 00:20:43.679 "state": "online", 00:20:43.679 "raid_level": "raid1", 00:20:43.679 "superblock": false, 00:20:43.679 "num_base_bdevs": 2, 00:20:43.679 "num_base_bdevs_discovered": 2, 00:20:43.679 "num_base_bdevs_operational": 2, 00:20:43.679 "base_bdevs_list": [ 00:20:43.679 { 00:20:43.679 "name": "spare", 00:20:43.679 "uuid": "d14e19a3-a322-58c7-bda6-57e2116aeff9", 00:20:43.679 "is_configured": true, 00:20:43.679 "data_offset": 0, 00:20:43.679 "data_size": 65536 00:20:43.679 }, 00:20:43.679 { 00:20:43.679 "name": "BaseBdev2", 00:20:43.679 "uuid": "0530cca7-179e-4ded-ac63-128e71036fc6", 00:20:43.679 "is_configured": true, 00:20:43.679 "data_offset": 0, 00:20:43.679 "data_size": 65536 00:20:43.679 } 00:20:43.679 ] 00:20:43.679 }' 00:20:43.679 10:34:37 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:43.679 
10:34:37 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:43.679 10:34:37 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:43.679 10:34:37 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:20:43.679 10:34:37 -- bdev/bdev_raid.sh@660 -- # break 00:20:43.679 10:34:37 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:43.679 10:34:37 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:43.679 10:34:37 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:43.679 10:34:37 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:43.679 10:34:37 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:43.679 10:34:37 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:43.679 10:34:37 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:43.938 10:34:37 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:43.938 "name": "raid_bdev1", 00:20:43.938 "uuid": "731454cd-b84a-41e6-890a-b303f6256767", 00:20:43.938 "strip_size_kb": 0, 00:20:43.938 "state": "online", 00:20:43.938 "raid_level": "raid1", 00:20:43.938 "superblock": false, 00:20:43.938 "num_base_bdevs": 2, 00:20:43.938 "num_base_bdevs_discovered": 2, 00:20:43.938 "num_base_bdevs_operational": 2, 00:20:43.938 "base_bdevs_list": [ 00:20:43.938 { 00:20:43.938 "name": "spare", 00:20:43.938 "uuid": "d14e19a3-a322-58c7-bda6-57e2116aeff9", 00:20:43.938 "is_configured": true, 00:20:43.938 "data_offset": 0, 00:20:43.938 "data_size": 65536 00:20:43.938 }, 00:20:43.938 { 00:20:43.938 "name": "BaseBdev2", 00:20:43.938 "uuid": "0530cca7-179e-4ded-ac63-128e71036fc6", 00:20:43.938 "is_configured": true, 00:20:43.938 "data_offset": 0, 00:20:43.938 "data_size": 65536 00:20:43.938 } 00:20:43.938 ] 00:20:43.938 }' 00:20:43.938 10:34:37 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:43.938 10:34:37 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:43.938 10:34:37 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:44.197 10:34:37 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:44.197 10:34:37 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:44.197 10:34:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:44.197 10:34:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:44.197 10:34:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:44.197 10:34:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:44.197 10:34:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:44.197 10:34:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:44.197 10:34:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:44.197 10:34:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:44.197 10:34:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:44.197 10:34:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:44.197 10:34:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:44.455 10:34:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:44.455 "name": "raid_bdev1", 00:20:44.455 "uuid": "731454cd-b84a-41e6-890a-b303f6256767", 00:20:44.455 "strip_size_kb": 0, 00:20:44.455 "state": "online", 00:20:44.455 "raid_level": "raid1", 00:20:44.455 "superblock": false, 00:20:44.455 
"num_base_bdevs": 2, 00:20:44.455 "num_base_bdevs_discovered": 2, 00:20:44.455 "num_base_bdevs_operational": 2, 00:20:44.455 "base_bdevs_list": [ 00:20:44.455 { 00:20:44.455 "name": "spare", 00:20:44.456 "uuid": "d14e19a3-a322-58c7-bda6-57e2116aeff9", 00:20:44.456 "is_configured": true, 00:20:44.456 "data_offset": 0, 00:20:44.456 "data_size": 65536 00:20:44.456 }, 00:20:44.456 { 00:20:44.456 "name": "BaseBdev2", 00:20:44.456 "uuid": "0530cca7-179e-4ded-ac63-128e71036fc6", 00:20:44.456 "is_configured": true, 00:20:44.456 "data_offset": 0, 00:20:44.456 "data_size": 65536 00:20:44.456 } 00:20:44.456 ] 00:20:44.456 }' 00:20:44.456 10:34:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:44.456 10:34:38 -- common/autotest_common.sh@10 -- # set +x 00:20:45.023 10:34:38 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:45.282 [2024-07-12 10:34:38.985044] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:45.282 [2024-07-12 10:34:38.985073] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:45.282 [2024-07-12 10:34:38.985198] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:45.282 [2024-07-12 10:34:38.985286] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:45.282 [2024-07-12 10:34:38.985302] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state offline 00:20:45.282 10:34:38 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:45.282 10:34:38 -- bdev/bdev_raid.sh@671 -- # jq length 00:20:45.282 10:34:39 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:20:45.282 10:34:39 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:20:45.282 10:34:39 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:45.282 10:34:39 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:45.282 10:34:39 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:20:45.282 10:34:39 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:45.282 10:34:39 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:20:45.282 10:34:39 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:45.282 10:34:39 -- bdev/nbd_common.sh@12 -- # local i 00:20:45.282 10:34:39 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:45.282 10:34:39 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:45.282 10:34:39 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:45.541 /dev/nbd0 00:20:45.541 10:34:39 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:45.541 10:34:39 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:45.541 10:34:39 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:20:45.541 10:34:39 -- common/autotest_common.sh@857 -- # local i 00:20:45.541 10:34:39 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:45.541 10:34:39 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:45.541 10:34:39 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:20:45.541 10:34:39 -- common/autotest_common.sh@861 -- # break 00:20:45.541 10:34:39 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:45.541 10:34:39 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:45.541 10:34:39 -- 
common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:45.541 1+0 records in 00:20:45.541 1+0 records out 00:20:45.541 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000249347 s, 16.4 MB/s 00:20:45.541 10:34:39 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:45.541 10:34:39 -- common/autotest_common.sh@874 -- # size=4096 00:20:45.541 10:34:39 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:45.541 10:34:39 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:45.541 10:34:39 -- common/autotest_common.sh@877 -- # return 0 00:20:45.541 10:34:39 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:45.541 10:34:39 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:45.541 10:34:39 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:20:45.799 /dev/nbd1 00:20:45.799 10:34:39 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:45.799 10:34:39 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:45.799 10:34:39 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:20:45.799 10:34:39 -- common/autotest_common.sh@857 -- # local i 00:20:45.799 10:34:39 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:45.799 10:34:39 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:45.799 10:34:39 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:20:45.799 10:34:39 -- common/autotest_common.sh@861 -- # break 00:20:45.799 10:34:39 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:45.799 10:34:39 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:45.800 10:34:39 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:45.800 1+0 records in 00:20:45.800 1+0 records out 00:20:45.800 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000800357 s, 5.1 MB/s 00:20:45.800 10:34:39 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:46.063 10:34:39 -- common/autotest_common.sh@874 -- # size=4096 00:20:46.063 10:34:39 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:46.063 10:34:39 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:46.063 10:34:39 -- common/autotest_common.sh@877 -- # return 0 00:20:46.063 10:34:39 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:46.063 10:34:39 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:46.063 10:34:39 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:20:46.063 10:34:39 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:20:46.063 10:34:39 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:46.063 10:34:39 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:20:46.063 10:34:39 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:46.063 10:34:39 -- bdev/nbd_common.sh@51 -- # local i 00:20:46.063 10:34:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:46.063 10:34:39 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:46.338 10:34:40 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:46.338 10:34:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:46.338 10:34:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:46.338 
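The nbd attach path above runs a two-stage readiness probe (autotest_common.sh @856-@877): first wait for the device to show up in /proc/partitions, then prove a direct-I/O read actually returns data, since the node can exist before the kernel will serve I/O. A sketch reassembled from the trace; the loop bound of 20 is visible in the xtrace, but the retry delay and the failure path never execute here, so those lines are assumptions (and /tmp/nbdtest stands in for the repo-local scratch file):

    waitfornbd() {
        local nbd_name=$1
        local i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed; the trace never needs a retry
        done
        for ((i = 1; i <= 20; i++)); do
            # read one 4 KiB block with O_DIRECT and check it came back non-empty
            dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
            size=$(stat -c %s /tmp/nbdtest)
            rm -f /tmp/nbdtest
            [ "$size" != 0 ] && return 0
            sleep 0.1   # assumed
        done
        return 1        # assumed failure path
    }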
10:34:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:46.338 10:34:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:46.338 10:34:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:46.338 10:34:40 -- bdev/nbd_common.sh@41 -- # break 00:20:46.338 10:34:40 -- bdev/nbd_common.sh@45 -- # return 0 00:20:46.338 10:34:40 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:46.338 10:34:40 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:20:46.617 10:34:40 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:46.617 10:34:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:46.617 10:34:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:46.617 10:34:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:46.617 10:34:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:46.617 10:34:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:46.617 10:34:40 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:20:46.617 10:34:40 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:20:46.617 10:34:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:46.617 10:34:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:46.617 10:34:40 -- bdev/nbd_common.sh@41 -- # break 00:20:46.617 10:34:40 -- bdev/nbd_common.sh@45 -- # return 0 00:20:46.617 10:34:40 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:20:46.617 10:34:40 -- bdev/bdev_raid.sh@709 -- # killprocess 125734 00:20:46.617 10:34:40 -- common/autotest_common.sh@926 -- # '[' -z 125734 ']' 00:20:46.617 10:34:40 -- common/autotest_common.sh@930 -- # kill -0 125734 00:20:46.617 10:34:40 -- common/autotest_common.sh@931 -- # uname 00:20:46.617 10:34:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:46.617 10:34:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 125734 00:20:46.886 10:34:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:46.886 10:34:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:46.886 10:34:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 125734' 00:20:46.886 killing process with pid 125734 00:20:46.886 10:34:40 -- common/autotest_common.sh@945 -- # kill 125734 00:20:46.886 10:34:40 -- common/autotest_common.sh@950 -- # wait 125734 00:20:46.886 Received shutdown signal, test time was about 60.000000 seconds 00:20:46.886 00:20:46.886 Latency(us) 00:20:46.886 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:46.886 =================================================================================================================== 00:20:46.886 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:46.886 [2024-07-12 10:34:40.537345] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:46.886 [2024-07-12 10:34:40.734397] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:48.264 ************************************ 00:20:48.264 END TEST raid_rebuild_test 00:20:48.264 ************************************ 00:20:48.264 10:34:41 -- bdev/bdev_raid.sh@711 -- # return 0 00:20:48.264 00:20:48.264 real 0m21.546s 00:20:48.264 user 0m30.262s 00:20:48.264 sys 0m3.413s 00:20:48.264 10:34:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:48.264 10:34:41 -- common/autotest_common.sh@10 -- # set +x 00:20:48.264 10:34:41 -- bdev/bdev_raid.sh@736 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false 00:20:48.264 
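A note on the teardown just before the raid_rebuild_test_sb banner above: killing the bdevperf process (pid 125734) goes through a defensive helper (autotest_common.sh @926-@950) that probes liveness with kill -0, looks up the command name so it never signals a sudo wrapper directly, then kills and reaps. Reassembled from the trace; the sudo branch never fires in this run, so its body below is an assumption:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        if kill -0 "$pid"; then             # signal 0 = liveness probe only
            local process_name=
            if [ "$(uname)" = Linux ]; then
                process_name=$(ps --no-headers -o comm= "$pid")
            fi
            if [ "$process_name" = sudo ]; then
                pid=$(pgrep -P "$pid")      # assumed: target the wrapped child instead
            fi
            echo "killing process with pid $pid"
            kill "$pid"
            wait "$pid"                     # reap, so the exit status is collected
        fi
    }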
10:34:41 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:20:48.264 10:34:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:48.264 10:34:41 -- common/autotest_common.sh@10 -- # set +x 00:20:48.264 ************************************ 00:20:48.264 START TEST raid_rebuild_test_sb 00:20:48.264 ************************************ 00:20:48.264 10:34:41 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 2 true false 00:20:48.264 10:34:41 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:20:48.264 10:34:41 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:20:48.264 10:34:41 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:20:48.264 10:34:41 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:20:48.264 10:34:41 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:20:48.264 10:34:41 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:20:48.264 10:34:41 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:48.264 10:34:41 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:20:48.264 10:34:41 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:48.264 10:34:41 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:48.264 10:34:41 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:20:48.264 10:34:41 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:48.264 10:34:41 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:48.264 10:34:41 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:20:48.264 10:34:41 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:20:48.264 10:34:41 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:20:48.264 10:34:41 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:20:48.264 10:34:41 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:20:48.265 10:34:41 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:20:48.265 10:34:41 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:20:48.265 10:34:41 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:20:48.265 10:34:41 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:20:48.265 10:34:41 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:20:48.265 10:34:41 -- bdev/bdev_raid.sh@544 -- # raid_pid=126306 00:20:48.265 10:34:41 -- bdev/bdev_raid.sh@545 -- # waitforlisten 126306 /var/tmp/spdk-raid.sock 00:20:48.265 10:34:41 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:48.265 10:34:41 -- common/autotest_common.sh@819 -- # '[' -z 126306 ']' 00:20:48.265 10:34:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:48.265 10:34:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:48.265 10:34:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:48.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:48.265 10:34:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:48.265 10:34:41 -- common/autotest_common.sh@10 -- # set +x 00:20:48.265 [2024-07-12 10:34:41.892798] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:20:48.265 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:48.265 Zero copy mechanism will not be used. 
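raid_rebuild_test's setup, traced at @517-@545 above, is driven entirely by its positional parameters (raid level, base bdev count, superblock on/off, background I/O on/off). The list of base bdev names is generated rather than hard-coded, which is what lets one function cover arrays of any width. A condensed restatement of what the xtrace shows; the -r/-b portion of create_arg is inferred from the bdev_raid_create invocation that appears later in this log, not read from the script:

    raid_level=raid1
    num_base_bdevs=2
    superblock=true

    # @521: expands to "BaseBdev1 BaseBdev2"
    base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done))

    # inferred from the eventual call:
    #   bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1
    create_arg="-r $raid_level -b '${base_bdevs[*]}' -n raid_bdev1"
    [ "$superblock" = true ] && create_arg+=' -s'   # @540 in the trace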
00:20:48.265 [2024-07-12 10:34:41.892988] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126306 ] 00:20:48.265 [2024-07-12 10:34:42.057858] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:48.524 [2024-07-12 10:34:42.239988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:48.524 [2024-07-12 10:34:42.424615] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:49.091 10:34:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:49.091 10:34:42 -- common/autotest_common.sh@852 -- # return 0 00:20:49.091 10:34:42 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:49.091 10:34:42 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:20:49.091 10:34:42 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:49.091 BaseBdev1_malloc 00:20:49.091 10:34:42 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:49.348 [2024-07-12 10:34:43.169023] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:49.348 [2024-07-12 10:34:43.169127] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:49.348 [2024-07-12 10:34:43.169162] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:20:49.348 [2024-07-12 10:34:43.169209] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:49.348 [2024-07-12 10:34:43.171485] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:49.348 [2024-07-12 10:34:43.171532] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:49.348 BaseBdev1 00:20:49.348 10:34:43 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:49.348 10:34:43 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:20:49.348 10:34:43 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:49.606 BaseBdev2_malloc 00:20:49.606 10:34:43 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:49.864 [2024-07-12 10:34:43.630223] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:49.864 [2024-07-12 10:34:43.630293] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:49.864 [2024-07-12 10:34:43.630335] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:20:49.864 [2024-07-12 10:34:43.630386] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:49.864 [2024-07-12 10:34:43.632617] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:49.864 [2024-07-12 10:34:43.632664] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:49.864 BaseBdev2 00:20:49.864 10:34:43 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:20:50.122 spare_malloc 00:20:50.122 10:34:43 
-- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:50.379 spare_delay 00:20:50.379 10:34:44 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:50.638 [2024-07-12 10:34:44.306930] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:50.638 [2024-07-12 10:34:44.307013] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:50.638 [2024-07-12 10:34:44.307054] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:50.638 [2024-07-12 10:34:44.307096] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:50.638 [2024-07-12 10:34:44.309366] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:50.638 [2024-07-12 10:34:44.309439] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:50.638 spare 00:20:50.638 10:34:44 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:20:50.638 [2024-07-12 10:34:44.483029] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:50.638 [2024-07-12 10:34:44.484892] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:50.638 [2024-07-12 10:34:44.485124] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:20:50.638 [2024-07-12 10:34:44.485141] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:50.638 [2024-07-12 10:34:44.485264] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:20:50.638 [2024-07-12 10:34:44.485604] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:20:50.638 [2024-07-12 10:34:44.485634] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:20:50.638 [2024-07-12 10:34:44.485760] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:50.638 10:34:44 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:50.638 10:34:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:50.638 10:34:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:50.638 10:34:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:50.638 10:34:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:50.638 10:34:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:50.638 10:34:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:50.638 10:34:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:50.638 10:34:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:50.638 10:34:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:50.638 10:34:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:50.638 10:34:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:50.897 10:34:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:50.897 "name": "raid_bdev1", 00:20:50.897 "uuid": "aae6228b-e969-49da-93b8-6d38d734a436", 00:20:50.897 
"strip_size_kb": 0, 00:20:50.897 "state": "online", 00:20:50.897 "raid_level": "raid1", 00:20:50.897 "superblock": true, 00:20:50.897 "num_base_bdevs": 2, 00:20:50.897 "num_base_bdevs_discovered": 2, 00:20:50.897 "num_base_bdevs_operational": 2, 00:20:50.897 "base_bdevs_list": [ 00:20:50.897 { 00:20:50.897 "name": "BaseBdev1", 00:20:50.897 "uuid": "9811e7df-3858-5915-a24a-de5cb114c761", 00:20:50.897 "is_configured": true, 00:20:50.897 "data_offset": 2048, 00:20:50.897 "data_size": 63488 00:20:50.897 }, 00:20:50.897 { 00:20:50.897 "name": "BaseBdev2", 00:20:50.897 "uuid": "6d6c26c9-2d73-5a6a-be26-f542dfbb0c3e", 00:20:50.897 "is_configured": true, 00:20:50.897 "data_offset": 2048, 00:20:50.897 "data_size": 63488 00:20:50.897 } 00:20:50.897 ] 00:20:50.897 }' 00:20:50.897 10:34:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:50.897 10:34:44 -- common/autotest_common.sh@10 -- # set +x 00:20:51.465 10:34:45 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:51.465 10:34:45 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:20:51.723 [2024-07-12 10:34:45.487327] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:51.723 10:34:45 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:20:51.723 10:34:45 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:51.723 10:34:45 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:51.982 10:34:45 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:20:51.982 10:34:45 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:20:51.982 10:34:45 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:20:51.982 10:34:45 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:20:51.982 10:34:45 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:51.982 10:34:45 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:20:51.982 10:34:45 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:51.982 10:34:45 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:20:51.982 10:34:45 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:51.982 10:34:45 -- bdev/nbd_common.sh@12 -- # local i 00:20:51.982 10:34:45 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:51.982 10:34:45 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:51.982 10:34:45 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:52.241 [2024-07-12 10:34:45.963250] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:20:52.241 /dev/nbd0 00:20:52.241 10:34:46 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:52.241 10:34:46 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:52.241 10:34:46 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:20:52.241 10:34:46 -- common/autotest_common.sh@857 -- # local i 00:20:52.241 10:34:46 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:52.241 10:34:46 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:52.241 10:34:46 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:20:52.241 10:34:46 -- common/autotest_common.sh@861 -- # break 00:20:52.241 10:34:46 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:52.241 10:34:46 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:52.241 10:34:46 -- common/autotest_common.sh@873 -- # dd 
if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:52.241 1+0 records in 00:20:52.241 1+0 records out 00:20:52.241 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000279814 s, 14.6 MB/s 00:20:52.241 10:34:46 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:52.241 10:34:46 -- common/autotest_common.sh@874 -- # size=4096 00:20:52.241 10:34:46 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:52.241 10:34:46 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:52.241 10:34:46 -- common/autotest_common.sh@877 -- # return 0 00:20:52.241 10:34:46 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:52.241 10:34:46 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:52.241 10:34:46 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:20:52.241 10:34:46 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:20:52.241 10:34:46 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:20:57.510 63488+0 records in 00:20:57.510 63488+0 records out 00:20:57.510 32505856 bytes (33 MB, 31 MiB) copied, 4.65275 s, 7.0 MB/s 00:20:57.510 10:34:50 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:20:57.510 10:34:50 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:57.510 10:34:50 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:20:57.510 10:34:50 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:57.510 10:34:50 -- bdev/nbd_common.sh@51 -- # local i 00:20:57.510 10:34:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:57.510 10:34:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:57.510 10:34:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:57.510 10:34:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:57.510 10:34:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:57.510 10:34:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:57.510 10:34:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:57.510 10:34:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:57.510 10:34:50 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:20:57.510 [2024-07-12 10:34:50.948449] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:57.510 10:34:51 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:20:57.510 10:34:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:57.510 10:34:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:57.510 10:34:51 -- bdev/nbd_common.sh@41 -- # break 00:20:57.510 10:34:51 -- bdev/nbd_common.sh@45 -- # return 0 00:20:57.510 10:34:51 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:20:57.510 [2024-07-12 10:34:51.276008] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:57.510 10:34:51 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:57.510 10:34:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:57.510 10:34:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:57.510 10:34:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:57.510 10:34:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:57.510 10:34:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:57.510 10:34:51 -- 
bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:57.510 10:34:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:57.510 10:34:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:57.510 10:34:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:57.510 10:34:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:57.510 10:34:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:57.768 10:34:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:57.768 "name": "raid_bdev1", 00:20:57.768 "uuid": "aae6228b-e969-49da-93b8-6d38d734a436", 00:20:57.768 "strip_size_kb": 0, 00:20:57.768 "state": "online", 00:20:57.768 "raid_level": "raid1", 00:20:57.768 "superblock": true, 00:20:57.768 "num_base_bdevs": 2, 00:20:57.768 "num_base_bdevs_discovered": 1, 00:20:57.768 "num_base_bdevs_operational": 1, 00:20:57.768 "base_bdevs_list": [ 00:20:57.768 { 00:20:57.768 "name": null, 00:20:57.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:57.768 "is_configured": false, 00:20:57.768 "data_offset": 2048, 00:20:57.768 "data_size": 63488 00:20:57.768 }, 00:20:57.768 { 00:20:57.768 "name": "BaseBdev2", 00:20:57.768 "uuid": "6d6c26c9-2d73-5a6a-be26-f542dfbb0c3e", 00:20:57.768 "is_configured": true, 00:20:57.768 "data_offset": 2048, 00:20:57.768 "data_size": 63488 00:20:57.768 } 00:20:57.768 ] 00:20:57.768 }' 00:20:57.768 10:34:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:57.768 10:34:51 -- common/autotest_common.sh@10 -- # set +x 00:20:58.334 10:34:52 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:58.593 [2024-07-12 10:34:52.308169] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:58.593 [2024-07-12 10:34:52.308216] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:58.593 [2024-07-12 10:34:52.320879] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca4950 00:20:58.593 [2024-07-12 10:34:52.322805] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:58.593 10:34:52 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:20:59.527 10:34:53 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:59.527 10:34:53 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:59.527 10:34:53 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:59.527 10:34:53 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:59.527 10:34:53 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:59.527 10:34:53 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:59.527 10:34:53 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:59.785 10:34:53 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:59.785 "name": "raid_bdev1", 00:20:59.785 "uuid": "aae6228b-e969-49da-93b8-6d38d734a436", 00:20:59.785 "strip_size_kb": 0, 00:20:59.785 "state": "online", 00:20:59.785 "raid_level": "raid1", 00:20:59.785 "superblock": true, 00:20:59.785 "num_base_bdevs": 2, 00:20:59.785 "num_base_bdevs_discovered": 2, 00:20:59.785 "num_base_bdevs_operational": 2, 00:20:59.785 "process": { 00:20:59.785 "type": "rebuild", 00:20:59.785 "target": "spare", 00:20:59.785 "progress": { 00:20:59.785 "blocks": 24576, 00:20:59.785 
"percent": 38 00:20:59.785 } 00:20:59.785 }, 00:20:59.785 "base_bdevs_list": [ 00:20:59.785 { 00:20:59.785 "name": "spare", 00:20:59.785 "uuid": "3d5a9d84-6b97-55c8-9731-37e5dbc3f520", 00:20:59.785 "is_configured": true, 00:20:59.785 "data_offset": 2048, 00:20:59.785 "data_size": 63488 00:20:59.785 }, 00:20:59.785 { 00:20:59.785 "name": "BaseBdev2", 00:20:59.785 "uuid": "6d6c26c9-2d73-5a6a-be26-f542dfbb0c3e", 00:20:59.785 "is_configured": true, 00:20:59.785 "data_offset": 2048, 00:20:59.785 "data_size": 63488 00:20:59.785 } 00:20:59.785 ] 00:20:59.785 }' 00:20:59.785 10:34:53 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:59.785 10:34:53 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:59.785 10:34:53 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:59.785 10:34:53 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:59.785 10:34:53 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:00.042 [2024-07-12 10:34:53.916291] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:00.042 [2024-07-12 10:34:53.932804] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:00.042 [2024-07-12 10:34:53.932897] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:00.300 10:34:53 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:00.300 10:34:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:00.300 10:34:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:00.300 10:34:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:00.300 10:34:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:00.300 10:34:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:21:00.300 10:34:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:00.300 10:34:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:00.300 10:34:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:00.300 10:34:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:00.300 10:34:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:00.300 10:34:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:00.300 10:34:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:00.300 "name": "raid_bdev1", 00:21:00.300 "uuid": "aae6228b-e969-49da-93b8-6d38d734a436", 00:21:00.300 "strip_size_kb": 0, 00:21:00.300 "state": "online", 00:21:00.300 "raid_level": "raid1", 00:21:00.300 "superblock": true, 00:21:00.300 "num_base_bdevs": 2, 00:21:00.300 "num_base_bdevs_discovered": 1, 00:21:00.300 "num_base_bdevs_operational": 1, 00:21:00.300 "base_bdevs_list": [ 00:21:00.300 { 00:21:00.300 "name": null, 00:21:00.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:00.300 "is_configured": false, 00:21:00.300 "data_offset": 2048, 00:21:00.300 "data_size": 63488 00:21:00.300 }, 00:21:00.300 { 00:21:00.300 "name": "BaseBdev2", 00:21:00.300 "uuid": "6d6c26c9-2d73-5a6a-be26-f542dfbb0c3e", 00:21:00.300 "is_configured": true, 00:21:00.300 "data_offset": 2048, 00:21:00.300 "data_size": 63488 00:21:00.300 } 00:21:00.300 ] 00:21:00.300 }' 00:21:00.300 10:34:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:00.300 10:34:54 -- common/autotest_common.sh@10 -- # set +x 00:21:01.233 10:34:54 -- 
bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:01.233 10:34:54 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:01.233 10:34:54 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:01.233 10:34:54 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:01.233 10:34:54 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:01.233 10:34:54 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:01.233 10:34:54 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:01.233 10:34:55 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:01.233 "name": "raid_bdev1", 00:21:01.233 "uuid": "aae6228b-e969-49da-93b8-6d38d734a436", 00:21:01.233 "strip_size_kb": 0, 00:21:01.233 "state": "online", 00:21:01.233 "raid_level": "raid1", 00:21:01.233 "superblock": true, 00:21:01.233 "num_base_bdevs": 2, 00:21:01.233 "num_base_bdevs_discovered": 1, 00:21:01.233 "num_base_bdevs_operational": 1, 00:21:01.233 "base_bdevs_list": [ 00:21:01.233 { 00:21:01.233 "name": null, 00:21:01.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:01.233 "is_configured": false, 00:21:01.234 "data_offset": 2048, 00:21:01.234 "data_size": 63488 00:21:01.234 }, 00:21:01.234 { 00:21:01.234 "name": "BaseBdev2", 00:21:01.234 "uuid": "6d6c26c9-2d73-5a6a-be26-f542dfbb0c3e", 00:21:01.234 "is_configured": true, 00:21:01.234 "data_offset": 2048, 00:21:01.234 "data_size": 63488 00:21:01.234 } 00:21:01.234 ] 00:21:01.234 }' 00:21:01.234 10:34:55 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:01.234 10:34:55 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:01.234 10:34:55 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:01.234 10:34:55 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:01.234 10:34:55 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:01.491 [2024-07-12 10:34:55.291529] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:01.491 [2024-07-12 10:34:55.291565] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:01.491 [2024-07-12 10:34:55.301374] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca4af0 00:21:01.491 [2024-07-12 10:34:55.303333] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:01.491 10:34:55 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:21:02.426 10:34:56 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:02.426 10:34:56 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:02.426 10:34:56 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:02.426 10:34:56 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:02.426 10:34:56 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:02.426 10:34:56 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:02.426 10:34:56 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:02.686 10:34:56 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:02.686 "name": "raid_bdev1", 00:21:02.686 "uuid": "aae6228b-e969-49da-93b8-6d38d734a436", 00:21:02.686 "strip_size_kb": 0, 00:21:02.686 "state": "online", 00:21:02.686 "raid_level": "raid1", 00:21:02.686 
"superblock": true, 00:21:02.686 "num_base_bdevs": 2, 00:21:02.686 "num_base_bdevs_discovered": 2, 00:21:02.686 "num_base_bdevs_operational": 2, 00:21:02.686 "process": { 00:21:02.686 "type": "rebuild", 00:21:02.686 "target": "spare", 00:21:02.686 "progress": { 00:21:02.686 "blocks": 24576, 00:21:02.686 "percent": 38 00:21:02.686 } 00:21:02.686 }, 00:21:02.686 "base_bdevs_list": [ 00:21:02.686 { 00:21:02.686 "name": "spare", 00:21:02.686 "uuid": "3d5a9d84-6b97-55c8-9731-37e5dbc3f520", 00:21:02.686 "is_configured": true, 00:21:02.686 "data_offset": 2048, 00:21:02.686 "data_size": 63488 00:21:02.686 }, 00:21:02.686 { 00:21:02.686 "name": "BaseBdev2", 00:21:02.686 "uuid": "6d6c26c9-2d73-5a6a-be26-f542dfbb0c3e", 00:21:02.686 "is_configured": true, 00:21:02.686 "data_offset": 2048, 00:21:02.686 "data_size": 63488 00:21:02.686 } 00:21:02.686 ] 00:21:02.686 }' 00:21:02.686 10:34:56 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:02.945 10:34:56 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:02.945 10:34:56 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:02.945 10:34:56 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:02.945 10:34:56 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:21:02.945 10:34:56 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:21:02.945 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:21:02.945 10:34:56 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:21:02.945 10:34:56 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:21:02.945 10:34:56 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:21:02.945 10:34:56 -- bdev/bdev_raid.sh@657 -- # local timeout=409 00:21:02.945 10:34:56 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:02.945 10:34:56 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:02.945 10:34:56 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:02.945 10:34:56 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:02.945 10:34:56 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:02.945 10:34:56 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:02.945 10:34:56 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:02.945 10:34:56 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:03.203 10:34:56 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:03.203 "name": "raid_bdev1", 00:21:03.203 "uuid": "aae6228b-e969-49da-93b8-6d38d734a436", 00:21:03.203 "strip_size_kb": 0, 00:21:03.203 "state": "online", 00:21:03.203 "raid_level": "raid1", 00:21:03.203 "superblock": true, 00:21:03.203 "num_base_bdevs": 2, 00:21:03.203 "num_base_bdevs_discovered": 2, 00:21:03.203 "num_base_bdevs_operational": 2, 00:21:03.203 "process": { 00:21:03.203 "type": "rebuild", 00:21:03.203 "target": "spare", 00:21:03.203 "progress": { 00:21:03.203 "blocks": 30720, 00:21:03.203 "percent": 48 00:21:03.203 } 00:21:03.203 }, 00:21:03.203 "base_bdevs_list": [ 00:21:03.203 { 00:21:03.203 "name": "spare", 00:21:03.203 "uuid": "3d5a9d84-6b97-55c8-9731-37e5dbc3f520", 00:21:03.203 "is_configured": true, 00:21:03.203 "data_offset": 2048, 00:21:03.203 "data_size": 63488 00:21:03.203 }, 00:21:03.203 { 00:21:03.203 "name": "BaseBdev2", 00:21:03.203 "uuid": "6d6c26c9-2d73-5a6a-be26-f542dfbb0c3e", 00:21:03.203 "is_configured": true, 00:21:03.203 "data_offset": 2048, 00:21:03.203 
"data_size": 63488 00:21:03.203 } 00:21:03.203 ] 00:21:03.203 }' 00:21:03.203 10:34:56 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:03.203 10:34:56 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:03.203 10:34:56 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:03.203 10:34:57 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:03.203 10:34:57 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:04.140 10:34:58 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:04.140 10:34:58 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:04.140 10:34:58 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:04.140 10:34:58 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:04.140 10:34:58 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:04.140 10:34:58 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:04.140 10:34:58 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:04.140 10:34:58 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:04.398 10:34:58 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:04.398 "name": "raid_bdev1", 00:21:04.398 "uuid": "aae6228b-e969-49da-93b8-6d38d734a436", 00:21:04.398 "strip_size_kb": 0, 00:21:04.398 "state": "online", 00:21:04.398 "raid_level": "raid1", 00:21:04.398 "superblock": true, 00:21:04.398 "num_base_bdevs": 2, 00:21:04.398 "num_base_bdevs_discovered": 2, 00:21:04.398 "num_base_bdevs_operational": 2, 00:21:04.398 "process": { 00:21:04.398 "type": "rebuild", 00:21:04.398 "target": "spare", 00:21:04.398 "progress": { 00:21:04.398 "blocks": 59392, 00:21:04.398 "percent": 93 00:21:04.398 } 00:21:04.398 }, 00:21:04.398 "base_bdevs_list": [ 00:21:04.398 { 00:21:04.398 "name": "spare", 00:21:04.398 "uuid": "3d5a9d84-6b97-55c8-9731-37e5dbc3f520", 00:21:04.398 "is_configured": true, 00:21:04.398 "data_offset": 2048, 00:21:04.398 "data_size": 63488 00:21:04.398 }, 00:21:04.398 { 00:21:04.398 "name": "BaseBdev2", 00:21:04.398 "uuid": "6d6c26c9-2d73-5a6a-be26-f542dfbb0c3e", 00:21:04.398 "is_configured": true, 00:21:04.398 "data_offset": 2048, 00:21:04.398 "data_size": 63488 00:21:04.398 } 00:21:04.399 ] 00:21:04.399 }' 00:21:04.399 10:34:58 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:04.657 10:34:58 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:04.657 10:34:58 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:04.657 10:34:58 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:04.657 10:34:58 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:04.657 [2024-07-12 10:34:58.420675] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:04.657 [2024-07-12 10:34:58.420761] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:04.657 [2024-07-12 10:34:58.420899] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:05.605 10:34:59 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:05.605 10:34:59 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:05.605 10:34:59 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:05.605 10:34:59 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:05.605 10:34:59 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:05.605 10:34:59 -- 
bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:05.605 10:34:59 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:05.605 10:34:59 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:05.864 10:34:59 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:05.864 "name": "raid_bdev1", 00:21:05.864 "uuid": "aae6228b-e969-49da-93b8-6d38d734a436", 00:21:05.864 "strip_size_kb": 0, 00:21:05.864 "state": "online", 00:21:05.864 "raid_level": "raid1", 00:21:05.864 "superblock": true, 00:21:05.864 "num_base_bdevs": 2, 00:21:05.864 "num_base_bdevs_discovered": 2, 00:21:05.864 "num_base_bdevs_operational": 2, 00:21:05.864 "base_bdevs_list": [ 00:21:05.864 { 00:21:05.864 "name": "spare", 00:21:05.864 "uuid": "3d5a9d84-6b97-55c8-9731-37e5dbc3f520", 00:21:05.864 "is_configured": true, 00:21:05.864 "data_offset": 2048, 00:21:05.864 "data_size": 63488 00:21:05.864 }, 00:21:05.864 { 00:21:05.864 "name": "BaseBdev2", 00:21:05.864 "uuid": "6d6c26c9-2d73-5a6a-be26-f542dfbb0c3e", 00:21:05.864 "is_configured": true, 00:21:05.864 "data_offset": 2048, 00:21:05.864 "data_size": 63488 00:21:05.864 } 00:21:05.864 ] 00:21:05.864 }' 00:21:05.864 10:34:59 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:05.864 10:34:59 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:05.864 10:34:59 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:05.864 10:34:59 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:21:05.864 10:34:59 -- bdev/bdev_raid.sh@660 -- # break 00:21:05.864 10:34:59 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:05.864 10:34:59 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:05.864 10:34:59 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:05.864 10:34:59 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:05.864 10:34:59 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:05.864 10:34:59 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:05.864 10:34:59 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:06.123 10:34:59 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:06.123 "name": "raid_bdev1", 00:21:06.123 "uuid": "aae6228b-e969-49da-93b8-6d38d734a436", 00:21:06.123 "strip_size_kb": 0, 00:21:06.123 "state": "online", 00:21:06.123 "raid_level": "raid1", 00:21:06.123 "superblock": true, 00:21:06.123 "num_base_bdevs": 2, 00:21:06.123 "num_base_bdevs_discovered": 2, 00:21:06.123 "num_base_bdevs_operational": 2, 00:21:06.123 "base_bdevs_list": [ 00:21:06.123 { 00:21:06.123 "name": "spare", 00:21:06.123 "uuid": "3d5a9d84-6b97-55c8-9731-37e5dbc3f520", 00:21:06.123 "is_configured": true, 00:21:06.123 "data_offset": 2048, 00:21:06.123 "data_size": 63488 00:21:06.123 }, 00:21:06.123 { 00:21:06.123 "name": "BaseBdev2", 00:21:06.123 "uuid": "6d6c26c9-2d73-5a6a-be26-f542dfbb0c3e", 00:21:06.123 "is_configured": true, 00:21:06.123 "data_offset": 2048, 00:21:06.123 "data_size": 63488 00:21:06.123 } 00:21:06.123 ] 00:21:06.123 }' 00:21:06.123 10:34:59 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:06.123 10:34:59 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:06.123 10:34:59 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:06.123 10:34:59 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:06.123 10:34:59 -- 
bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:06.123 10:34:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:06.123 10:34:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:06.123 10:34:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:06.123 10:34:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:06.123 10:34:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:06.123 10:34:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:06.123 10:34:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:06.123 10:34:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:06.123 10:34:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:06.123 10:34:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:06.123 10:34:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:06.382 10:35:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:06.382 "name": "raid_bdev1", 00:21:06.382 "uuid": "aae6228b-e969-49da-93b8-6d38d734a436", 00:21:06.382 "strip_size_kb": 0, 00:21:06.382 "state": "online", 00:21:06.382 "raid_level": "raid1", 00:21:06.382 "superblock": true, 00:21:06.382 "num_base_bdevs": 2, 00:21:06.382 "num_base_bdevs_discovered": 2, 00:21:06.382 "num_base_bdevs_operational": 2, 00:21:06.382 "base_bdevs_list": [ 00:21:06.382 { 00:21:06.382 "name": "spare", 00:21:06.382 "uuid": "3d5a9d84-6b97-55c8-9731-37e5dbc3f520", 00:21:06.382 "is_configured": true, 00:21:06.382 "data_offset": 2048, 00:21:06.382 "data_size": 63488 00:21:06.382 }, 00:21:06.382 { 00:21:06.382 "name": "BaseBdev2", 00:21:06.382 "uuid": "6d6c26c9-2d73-5a6a-be26-f542dfbb0c3e", 00:21:06.382 "is_configured": true, 00:21:06.382 "data_offset": 2048, 00:21:06.382 "data_size": 63488 00:21:06.382 } 00:21:06.382 ] 00:21:06.382 }' 00:21:06.382 10:35:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:06.382 10:35:00 -- common/autotest_common.sh@10 -- # set +x 00:21:07.318 10:35:00 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:07.318 [2024-07-12 10:35:01.159956] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:07.318 [2024-07-12 10:35:01.159982] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:07.318 [2024-07-12 10:35:01.160063] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:07.318 [2024-07-12 10:35:01.160127] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:07.318 [2024-07-12 10:35:01.160138] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:21:07.318 10:35:01 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:07.318 10:35:01 -- bdev/bdev_raid.sh@671 -- # jq length 00:21:07.576 10:35:01 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:21:07.576 10:35:01 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:21:07.576 10:35:01 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:21:07.576 10:35:01 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:07.576 10:35:01 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 
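One harness bug logged earlier deserves a flag: at 10:34:56 the trace printed "/home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected" right after '[' = false ']'. That is the classic unquoted-expansion trap of the [ builtin: a variable on the left-hand side expanded to nothing, leaving [ with too few operands. The malformed test exits with status 2, which the surrounding conditional treats the same as false, which is why the run carried on normally. The mechanics, using a hypothetical variable name because the real one at bdev_raid.sh:617 is not visible in the trace:

    var=""
    [ $var = false ]     # expands to: [ = false ]
                         # -> "[: =: unary operator expected", exit status 2
    [ "$var" = false ]   # expands to: [ "" = false ]
                         # -> well-formed comparison, exit status 1 (plain false)
    [[ $var == false ]]  # [[ ]] does not word-split, so quoting is optional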
00:21:07.576 10:35:01 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:07.576 10:35:01 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:21:07.576 10:35:01 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:07.576 10:35:01 -- bdev/nbd_common.sh@12 -- # local i 00:21:07.576 10:35:01 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:07.576 10:35:01 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:07.576 10:35:01 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:07.834 /dev/nbd0 00:21:07.834 10:35:01 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:07.834 10:35:01 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:07.834 10:35:01 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:21:07.834 10:35:01 -- common/autotest_common.sh@857 -- # local i 00:21:07.834 10:35:01 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:07.834 10:35:01 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:07.834 10:35:01 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:21:07.835 10:35:01 -- common/autotest_common.sh@861 -- # break 00:21:07.835 10:35:01 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:07.835 10:35:01 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:07.835 10:35:01 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:07.835 1+0 records in 00:21:07.835 1+0 records out 00:21:07.835 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000389263 s, 10.5 MB/s 00:21:07.835 10:35:01 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:07.835 10:35:01 -- common/autotest_common.sh@874 -- # size=4096 00:21:07.835 10:35:01 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:07.835 10:35:01 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:07.835 10:35:01 -- common/autotest_common.sh@877 -- # return 0 00:21:07.835 10:35:01 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:07.835 10:35:01 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:07.835 10:35:01 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:21:08.093 /dev/nbd1 00:21:08.093 10:35:01 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:08.093 10:35:01 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:08.093 10:35:01 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:21:08.093 10:35:01 -- common/autotest_common.sh@857 -- # local i 00:21:08.093 10:35:01 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:08.093 10:35:01 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:08.093 10:35:01 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:21:08.093 10:35:01 -- common/autotest_common.sh@861 -- # break 00:21:08.093 10:35:01 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:08.093 10:35:01 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:08.093 10:35:01 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:08.093 1+0 records in 00:21:08.093 1+0 records out 00:21:08.093 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000555888 s, 7.4 MB/s 00:21:08.093 10:35:01 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:08.093 10:35:01 -- common/autotest_common.sh@874 -- # 
size=4096 00:21:08.093 10:35:01 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:08.093 10:35:01 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:08.093 10:35:01 -- common/autotest_common.sh@877 -- # return 0 00:21:08.093 10:35:01 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:08.093 10:35:01 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:08.093 10:35:01 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:08.352 10:35:02 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:21:08.352 10:35:02 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:08.352 10:35:02 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:21:08.352 10:35:02 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:08.352 10:35:02 -- bdev/nbd_common.sh@51 -- # local i 00:21:08.352 10:35:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:08.352 10:35:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:08.610 10:35:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:08.610 10:35:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:08.610 10:35:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:08.610 10:35:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:08.610 10:35:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:08.610 10:35:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:08.610 10:35:02 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:21:08.610 10:35:02 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:21:08.610 10:35:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:08.610 10:35:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:08.610 10:35:02 -- bdev/nbd_common.sh@41 -- # break 00:21:08.610 10:35:02 -- bdev/nbd_common.sh@45 -- # return 0 00:21:08.610 10:35:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:08.610 10:35:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:08.868 10:35:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:08.868 10:35:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:08.868 10:35:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:08.868 10:35:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:08.868 10:35:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:08.868 10:35:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:08.868 10:35:02 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:21:08.868 10:35:02 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:21:08.868 10:35:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:08.868 10:35:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:08.868 10:35:02 -- bdev/nbd_common.sh@41 -- # break 00:21:08.868 10:35:02 -- bdev/nbd_common.sh@45 -- # return 0 00:21:08.868 10:35:02 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:21:08.868 10:35:02 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:08.868 10:35:02 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:21:08.868 10:35:02 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:21:09.127 10:35:03 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 
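
With both legs exported over NBD, the comparison and teardown traced here reduce to the sketch below. `cmp -i 1048576` skips the first 1 MiB of each device, which matches the superblock layout reported above (data_offset 2048 blocks at a 512-byte blocklen); the 20 x 0.1 s poll loop mirrors the waitfornbd_exit logic in nbd_common.sh. rpc/sock paths are assumed from the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    # Byte-compare the rebuilt leg against the spare, ignoring the 1 MiB
    # superblock region at the front of each device.
    cmp -i 1048576 /dev/nbd0 /dev/nbd1

    for nbd in nbd0 nbd1; do
        "$rpc" -s "$sock" nbd_stop_disk "/dev/$nbd"
        # Poll /proc/partitions until the kernel drops the device node.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd" /proc/partitions || break
            sleep 0.1
        done
    done
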
00:21:09.386 [2024-07-12 10:35:03.240703] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:09.386 [2024-07-12 10:35:03.240799] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:09.386 [2024-07-12 10:35:03.240834] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:21:09.386 [2024-07-12 10:35:03.240861] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:09.386 [2024-07-12 10:35:03.243083] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:09.386 [2024-07-12 10:35:03.243151] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:09.386 [2024-07-12 10:35:03.243243] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:09.386 [2024-07-12 10:35:03.243306] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:09.386 BaseBdev1 00:21:09.386 10:35:03 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:09.386 10:35:03 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:21:09.386 10:35:03 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:21:09.644 10:35:03 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:09.902 [2024-07-12 10:35:03.604756] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:09.902 [2024-07-12 10:35:03.604819] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:09.902 [2024-07-12 10:35:03.604848] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:21:09.902 [2024-07-12 10:35:03.604874] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:09.902 [2024-07-12 10:35:03.605230] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:09.902 [2024-07-12 10:35:03.605289] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:09.902 [2024-07-12 10:35:03.605371] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:21:09.902 [2024-07-12 10:35:03.605385] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:21:09.902 [2024-07-12 10:35:03.605392] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:09.902 [2024-07-12 10:35:03.605408] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state configuring 00:21:09.902 [2024-07-12 10:35:03.605465] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:09.902 BaseBdev2 00:21:09.902 10:35:03 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:21:09.902 10:35:03 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:10.160 [2024-07-12 10:35:03.968911] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:10.160 [2024-07-12 10:35:03.969013] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 
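
The restart-from-superblock path being exercised around this point tears each passthru down and registers it again, so that bdev examine re-reads the raid superblock from the underlying bdev and re-attaches it to raid_bdev1. A condensed sketch of that delete/re-create cycle, using only names and RPCs that appear in the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    # Re-register each base passthru; examine finds the raid superblock on it
    # and claims it for raid_bdev1 again.
    for bdev in BaseBdev1 BaseBdev2; do
        "$rpc" -s "$sock" bdev_passthru_delete "$bdev"
        "$rpc" -s "$sock" bdev_passthru_create -b "${bdev}_malloc" -p "$bdev"
    done

    # The spare sits on a delay bdev rather than directly on a malloc bdev.
    "$rpc" -s "$sock" bdev_passthru_delete spare
    "$rpc" -s "$sock" bdev_passthru_create -b spare_delay -p spare
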
00:21:10.160 [2024-07-12 10:35:03.969067] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:21:10.160 [2024-07-12 10:35:03.969096] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:10.160 [2024-07-12 10:35:03.969764] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:10.160 [2024-07-12 10:35:03.969835] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:10.160 [2024-07-12 10:35:03.969983] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:21:10.160 [2024-07-12 10:35:03.970034] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:10.160 spare 00:21:10.160 10:35:03 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:10.160 10:35:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:10.160 10:35:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:10.160 10:35:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:10.160 10:35:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:10.160 10:35:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:10.160 10:35:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:10.160 10:35:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:10.160 10:35:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:10.160 10:35:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:10.160 10:35:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:10.160 10:35:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:10.160 [2024-07-12 10:35:04.070176] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:21:10.160 [2024-07-12 10:35:04.070201] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:10.160 [2024-07-12 10:35:04.070326] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc5630 00:21:10.160 [2024-07-12 10:35:04.070749] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:21:10.160 [2024-07-12 10:35:04.070777] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:21:10.160 [2024-07-12 10:35:04.070910] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:10.419 10:35:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:10.419 "name": "raid_bdev1", 00:21:10.419 "uuid": "aae6228b-e969-49da-93b8-6d38d734a436", 00:21:10.419 "strip_size_kb": 0, 00:21:10.419 "state": "online", 00:21:10.419 "raid_level": "raid1", 00:21:10.419 "superblock": true, 00:21:10.419 "num_base_bdevs": 2, 00:21:10.419 "num_base_bdevs_discovered": 2, 00:21:10.419 "num_base_bdevs_operational": 2, 00:21:10.419 "base_bdevs_list": [ 00:21:10.419 { 00:21:10.419 "name": "spare", 00:21:10.419 "uuid": "3d5a9d84-6b97-55c8-9731-37e5dbc3f520", 00:21:10.419 "is_configured": true, 00:21:10.419 "data_offset": 2048, 00:21:10.419 "data_size": 63488 00:21:10.419 }, 00:21:10.419 { 00:21:10.419 "name": "BaseBdev2", 00:21:10.419 "uuid": "6d6c26c9-2d73-5a6a-be26-f542dfbb0c3e", 00:21:10.419 "is_configured": true, 00:21:10.419 "data_offset": 2048, 00:21:10.419 "data_size": 63488 00:21:10.419 } 00:21:10.419 ] 00:21:10.419 }' 00:21:10.419 10:35:04 -- 
bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:10.419 10:35:04 -- common/autotest_common.sh@10 -- # set +x 00:21:10.986 10:35:04 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:10.986 10:35:04 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:10.986 10:35:04 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:10.986 10:35:04 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:10.986 10:35:04 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:10.986 10:35:04 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:10.986 10:35:04 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:11.245 10:35:05 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:11.245 "name": "raid_bdev1", 00:21:11.245 "uuid": "aae6228b-e969-49da-93b8-6d38d734a436", 00:21:11.245 "strip_size_kb": 0, 00:21:11.245 "state": "online", 00:21:11.245 "raid_level": "raid1", 00:21:11.245 "superblock": true, 00:21:11.245 "num_base_bdevs": 2, 00:21:11.245 "num_base_bdevs_discovered": 2, 00:21:11.245 "num_base_bdevs_operational": 2, 00:21:11.245 "base_bdevs_list": [ 00:21:11.245 { 00:21:11.245 "name": "spare", 00:21:11.245 "uuid": "3d5a9d84-6b97-55c8-9731-37e5dbc3f520", 00:21:11.245 "is_configured": true, 00:21:11.245 "data_offset": 2048, 00:21:11.245 "data_size": 63488 00:21:11.245 }, 00:21:11.245 { 00:21:11.245 "name": "BaseBdev2", 00:21:11.245 "uuid": "6d6c26c9-2d73-5a6a-be26-f542dfbb0c3e", 00:21:11.245 "is_configured": true, 00:21:11.245 "data_offset": 2048, 00:21:11.245 "data_size": 63488 00:21:11.245 } 00:21:11.245 ] 00:21:11.245 }' 00:21:11.245 10:35:05 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:11.245 10:35:05 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:11.245 10:35:05 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:11.245 10:35:05 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:11.245 10:35:05 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:11.245 10:35:05 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:11.504 10:35:05 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:21:11.504 10:35:05 -- bdev/bdev_raid.sh@709 -- # killprocess 126306 00:21:11.504 10:35:05 -- common/autotest_common.sh@926 -- # '[' -z 126306 ']' 00:21:11.504 10:35:05 -- common/autotest_common.sh@930 -- # kill -0 126306 00:21:11.504 10:35:05 -- common/autotest_common.sh@931 -- # uname 00:21:11.504 10:35:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:11.504 10:35:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 126306 00:21:11.504 killing process with pid 126306 00:21:11.504 Received shutdown signal, test time was about 60.000000 seconds 00:21:11.504 00:21:11.504 Latency(us) 00:21:11.504 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:11.504 =================================================================================================================== 00:21:11.504 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:11.504 10:35:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:11.504 10:35:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:11.504 10:35:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 126306' 00:21:11.504 10:35:05 -- 
common/autotest_common.sh@945 -- # kill 126306 00:21:11.504 10:35:05 -- common/autotest_common.sh@950 -- # wait 126306 00:21:11.504 [2024-07-12 10:35:05.357553] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:11.504 [2024-07-12 10:35:05.357629] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:11.504 [2024-07-12 10:35:05.357737] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:11.504 [2024-07-12 10:35:05.357758] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:21:11.762 [2024-07-12 10:35:05.547824] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:12.697 ************************************ 00:21:12.697 END TEST raid_rebuild_test_sb 00:21:12.697 ************************************ 00:21:12.697 10:35:06 -- bdev/bdev_raid.sh@711 -- # return 0 00:21:12.697 00:21:12.697 real 0m24.657s 00:21:12.697 user 0m35.592s 00:21:12.697 sys 0m4.086s 00:21:12.697 10:35:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:12.697 10:35:06 -- common/autotest_common.sh@10 -- # set +x 00:21:12.697 10:35:06 -- bdev/bdev_raid.sh@737 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true 00:21:12.697 10:35:06 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:21:12.697 10:35:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:12.697 10:35:06 -- common/autotest_common.sh@10 -- # set +x 00:21:12.697 ************************************ 00:21:12.697 START TEST raid_rebuild_test_io 00:21:12.697 ************************************ 00:21:12.697 10:35:06 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 2 false true 00:21:12.697 10:35:06 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:21:12.697 10:35:06 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:21:12.697 10:35:06 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:21:12.697 10:35:06 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:21:12.697 10:35:06 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:21:12.697 10:35:06 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:21:12.697 10:35:06 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:12.697 10:35:06 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:21:12.697 10:35:06 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:12.697 10:35:06 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:12.697 10:35:06 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:21:12.697 10:35:06 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:12.697 10:35:06 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:12.697 10:35:06 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:21:12.697 10:35:06 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:21:12.697 10:35:06 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:21:12.697 10:35:06 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:21:12.697 10:35:06 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:21:12.697 10:35:06 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:21:12.697 10:35:06 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:21:12.697 10:35:06 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:21:12.697 10:35:06 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:21:12.697 10:35:06 -- bdev/bdev_raid.sh@544 -- # raid_pid=126975 00:21:12.697 10:35:06 -- bdev/bdev_raid.sh@545 -- # waitforlisten 126975 
/var/tmp/spdk-raid.sock 00:21:12.697 10:35:06 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:12.697 10:35:06 -- common/autotest_common.sh@819 -- # '[' -z 126975 ']' 00:21:12.697 10:35:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:12.697 10:35:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:12.697 10:35:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:12.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:12.697 10:35:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:12.697 10:35:06 -- common/autotest_common.sh@10 -- # set +x 00:21:12.697 [2024-07-12 10:35:06.592994] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:12.697 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:12.697 Zero copy mechanism will not be used. 00:21:12.697 [2024-07-12 10:35:06.593149] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126975 ] 00:21:12.955 [2024-07-12 10:35:06.743910] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:13.213 [2024-07-12 10:35:06.899303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:13.213 [2024-07-12 10:35:07.064174] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:13.780 10:35:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:13.780 10:35:07 -- common/autotest_common.sh@852 -- # return 0 00:21:13.780 10:35:07 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:13.780 10:35:07 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:13.780 10:35:07 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:14.038 BaseBdev1 00:21:14.038 10:35:07 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:14.038 10:35:07 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:14.038 10:35:07 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:14.296 BaseBdev2 00:21:14.296 10:35:07 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:21:14.555 spare_malloc 00:21:14.555 10:35:08 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:14.555 spare_delay 00:21:14.555 10:35:08 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:14.813 [2024-07-12 10:35:08.702742] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:14.814 [2024-07-12 10:35:08.702840] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:14.814 [2024-07-12 10:35:08.702872] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:14.814 
[2024-07-12 10:35:08.702914] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:14.814 [2024-07-12 10:35:08.705154] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:14.814 [2024-07-12 10:35:08.705202] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:14.814 spare 00:21:14.814 10:35:08 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:21:15.072 [2024-07-12 10:35:08.886803] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:15.072 [2024-07-12 10:35:08.888476] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:15.072 [2024-07-12 10:35:08.888558] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008480 00:21:15.072 [2024-07-12 10:35:08.888570] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:21:15.072 [2024-07-12 10:35:08.888691] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:21:15.072 [2024-07-12 10:35:08.889048] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008480 00:21:15.072 [2024-07-12 10:35:08.889073] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008480 00:21:15.072 [2024-07-12 10:35:08.889224] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:15.072 10:35:08 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:15.072 10:35:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:15.072 10:35:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:15.072 10:35:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:15.072 10:35:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:15.072 10:35:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:15.072 10:35:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:15.072 10:35:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:15.072 10:35:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:15.072 10:35:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:15.072 10:35:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:15.072 10:35:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:15.330 10:35:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:15.330 "name": "raid_bdev1", 00:21:15.330 "uuid": "d504bae7-27ae-457c-bc90-707c2ca74fba", 00:21:15.330 "strip_size_kb": 0, 00:21:15.330 "state": "online", 00:21:15.330 "raid_level": "raid1", 00:21:15.330 "superblock": false, 00:21:15.330 "num_base_bdevs": 2, 00:21:15.330 "num_base_bdevs_discovered": 2, 00:21:15.330 "num_base_bdevs_operational": 2, 00:21:15.330 "base_bdevs_list": [ 00:21:15.330 { 00:21:15.330 "name": "BaseBdev1", 00:21:15.330 "uuid": "53466a75-206e-4f79-bc48-18cd2a422330", 00:21:15.330 "is_configured": true, 00:21:15.330 "data_offset": 0, 00:21:15.330 "data_size": 65536 00:21:15.330 }, 00:21:15.330 { 00:21:15.330 "name": "BaseBdev2", 00:21:15.330 "uuid": "4f593a5d-9653-4e7d-ae7c-537761366cd5", 00:21:15.330 "is_configured": true, 00:21:15.330 "data_offset": 0, 00:21:15.330 "data_size": 65536 00:21:15.330 } 00:21:15.330 ] 00:21:15.330 }' 
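
For raid_rebuild_test_io the trace assembles the array without superblocks: bdevperf is launched with -z so it waits for an RPC trigger before issuing I/O against raid_bdev1, two plain malloc bdevs form the raid1, and the spare is a delay bdev wrapped in a passthru so rebuild traffic can be throttled. A condensed sketch of the setup commands as traced above; the backgrounding with `&` stands in for the script's waitforlisten handshake:

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    # 60 s of 50/50 randrw in 3 MiB I/Os at queue depth 2; -z holds I/O until
    # a perform_tests RPC arrives, -L bdev_raid enables the debug log seen here.
    "$bdevperf" -r "$sock" -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &

    "$rpc" -s "$sock" bdev_malloc_create 32 512 -b BaseBdev1
    "$rpc" -s "$sock" bdev_malloc_create 32 512 -b BaseBdev2
    "$rpc" -s "$sock" bdev_malloc_create 32 512 -b spare_malloc
    "$rpc" -s "$sock" bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
    "$rpc" -s "$sock" bdev_passthru_create -b spare_delay -p spare
    "$rpc" -s "$sock" bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1
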
00:21:15.330 10:35:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:15.330 10:35:09 -- common/autotest_common.sh@10 -- # set +x 00:21:16.264 10:35:09 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:16.264 10:35:09 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:21:16.264 [2024-07-12 10:35:10.055226] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:16.264 10:35:10 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:21:16.264 10:35:10 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:16.264 10:35:10 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:16.522 10:35:10 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:21:16.522 10:35:10 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:21:16.522 10:35:10 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:21:16.522 10:35:10 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:21:16.522 [2024-07-12 10:35:10.321614] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:21:16.522 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:16.522 Zero copy mechanism will not be used. 00:21:16.522 Running I/O for 60 seconds... 00:21:16.522 [2024-07-12 10:35:10.420828] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:16.522 [2024-07-12 10:35:10.427043] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005790 00:21:16.779 10:35:10 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:16.779 10:35:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:16.779 10:35:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:16.779 10:35:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:16.779 10:35:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:16.779 10:35:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:21:16.779 10:35:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:16.779 10:35:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:16.779 10:35:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:16.779 10:35:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:16.779 10:35:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:16.779 10:35:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:17.036 10:35:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:17.036 "name": "raid_bdev1", 00:21:17.036 "uuid": "d504bae7-27ae-457c-bc90-707c2ca74fba", 00:21:17.036 "strip_size_kb": 0, 00:21:17.036 "state": "online", 00:21:17.036 "raid_level": "raid1", 00:21:17.036 "superblock": false, 00:21:17.036 "num_base_bdevs": 2, 00:21:17.036 "num_base_bdevs_discovered": 1, 00:21:17.036 "num_base_bdevs_operational": 1, 00:21:17.036 "base_bdevs_list": [ 00:21:17.036 { 00:21:17.036 "name": null, 00:21:17.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:17.036 "is_configured": false, 00:21:17.036 "data_offset": 0, 00:21:17.036 "data_size": 65536 00:21:17.036 }, 00:21:17.036 { 00:21:17.036 "name": "BaseBdev2", 
00:21:17.036 "uuid": "4f593a5d-9653-4e7d-ae7c-537761366cd5", 00:21:17.036 "is_configured": true, 00:21:17.036 "data_offset": 0, 00:21:17.036 "data_size": 65536 00:21:17.036 } 00:21:17.036 ] 00:21:17.036 }' 00:21:17.036 10:35:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:17.036 10:35:10 -- common/autotest_common.sh@10 -- # set +x 00:21:17.600 10:35:11 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:17.858 [2024-07-12 10:35:11.687962] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:17.858 [2024-07-12 10:35:11.688010] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:17.858 [2024-07-12 10:35:11.727256] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:21:17.858 10:35:11 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:21:17.858 [2024-07-12 10:35:11.729531] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:18.117 [2024-07-12 10:35:11.850894] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:18.117 [2024-07-12 10:35:11.851247] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:18.376 [2024-07-12 10:35:12.058690] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:18.376 [2024-07-12 10:35:12.058946] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:18.645 [2024-07-12 10:35:12.530554] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:18.941 10:35:12 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:18.941 10:35:12 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:18.941 10:35:12 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:18.941 10:35:12 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:18.941 10:35:12 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:18.941 10:35:12 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:18.941 10:35:12 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:18.941 [2024-07-12 10:35:12.757881] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:19.232 [2024-07-12 10:35:12.892509] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:19.232 10:35:12 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:19.232 "name": "raid_bdev1", 00:21:19.232 "uuid": "d504bae7-27ae-457c-bc90-707c2ca74fba", 00:21:19.232 "strip_size_kb": 0, 00:21:19.232 "state": "online", 00:21:19.232 "raid_level": "raid1", 00:21:19.232 "superblock": false, 00:21:19.232 "num_base_bdevs": 2, 00:21:19.232 "num_base_bdevs_discovered": 2, 00:21:19.232 "num_base_bdevs_operational": 2, 00:21:19.232 "process": { 00:21:19.232 "type": "rebuild", 00:21:19.232 "target": "spare", 00:21:19.232 "progress": { 00:21:19.232 "blocks": 16384, 00:21:19.232 "percent": 25 00:21:19.232 } 00:21:19.232 }, 00:21:19.232 "base_bdevs_list": [ 00:21:19.232 { 00:21:19.232 "name": 
"spare", 00:21:19.232 "uuid": "b169fe86-8a66-5582-b2b6-caeb37d54f6e", 00:21:19.232 "is_configured": true, 00:21:19.232 "data_offset": 0, 00:21:19.232 "data_size": 65536 00:21:19.232 }, 00:21:19.232 { 00:21:19.232 "name": "BaseBdev2", 00:21:19.232 "uuid": "4f593a5d-9653-4e7d-ae7c-537761366cd5", 00:21:19.232 "is_configured": true, 00:21:19.232 "data_offset": 0, 00:21:19.232 "data_size": 65536 00:21:19.232 } 00:21:19.232 ] 00:21:19.232 }' 00:21:19.232 10:35:12 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:19.232 10:35:12 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:19.232 10:35:12 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:19.232 10:35:13 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:19.232 10:35:13 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:19.504 [2024-07-12 10:35:13.198937] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:19.504 [2024-07-12 10:35:13.234740] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:21:19.504 [2024-07-12 10:35:13.235256] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:21:19.504 [2024-07-12 10:35:13.341819] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:19.504 [2024-07-12 10:35:13.349727] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:19.504 [2024-07-12 10:35:13.389005] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005790 00:21:19.504 10:35:13 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:19.504 10:35:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:19.504 10:35:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:19.504 10:35:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:19.504 10:35:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:19.504 10:35:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:21:19.504 10:35:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:19.504 10:35:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:19.504 10:35:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:19.504 10:35:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:19.504 10:35:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:19.504 10:35:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:19.762 10:35:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:19.762 "name": "raid_bdev1", 00:21:19.762 "uuid": "d504bae7-27ae-457c-bc90-707c2ca74fba", 00:21:19.762 "strip_size_kb": 0, 00:21:19.762 "state": "online", 00:21:19.762 "raid_level": "raid1", 00:21:19.762 "superblock": false, 00:21:19.762 "num_base_bdevs": 2, 00:21:19.762 "num_base_bdevs_discovered": 1, 00:21:19.762 "num_base_bdevs_operational": 1, 00:21:19.762 "base_bdevs_list": [ 00:21:19.762 { 00:21:19.762 "name": null, 00:21:19.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:19.762 "is_configured": false, 00:21:19.762 "data_offset": 0, 00:21:19.762 "data_size": 65536 00:21:19.762 }, 00:21:19.762 { 00:21:19.762 "name": "BaseBdev2", 00:21:19.762 "uuid": 
"4f593a5d-9653-4e7d-ae7c-537761366cd5", 00:21:19.762 "is_configured": true, 00:21:19.762 "data_offset": 0, 00:21:19.762 "data_size": 65536 00:21:19.762 } 00:21:19.762 ] 00:21:19.762 }' 00:21:19.762 10:35:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:19.762 10:35:13 -- common/autotest_common.sh@10 -- # set +x 00:21:20.694 10:35:14 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:20.694 10:35:14 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:20.694 10:35:14 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:20.694 10:35:14 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:20.694 10:35:14 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:20.694 10:35:14 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:20.694 10:35:14 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:20.694 10:35:14 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:20.694 "name": "raid_bdev1", 00:21:20.694 "uuid": "d504bae7-27ae-457c-bc90-707c2ca74fba", 00:21:20.694 "strip_size_kb": 0, 00:21:20.694 "state": "online", 00:21:20.694 "raid_level": "raid1", 00:21:20.694 "superblock": false, 00:21:20.694 "num_base_bdevs": 2, 00:21:20.694 "num_base_bdevs_discovered": 1, 00:21:20.694 "num_base_bdevs_operational": 1, 00:21:20.694 "base_bdevs_list": [ 00:21:20.694 { 00:21:20.694 "name": null, 00:21:20.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:20.694 "is_configured": false, 00:21:20.694 "data_offset": 0, 00:21:20.694 "data_size": 65536 00:21:20.694 }, 00:21:20.694 { 00:21:20.694 "name": "BaseBdev2", 00:21:20.694 "uuid": "4f593a5d-9653-4e7d-ae7c-537761366cd5", 00:21:20.694 "is_configured": true, 00:21:20.694 "data_offset": 0, 00:21:20.694 "data_size": 65536 00:21:20.694 } 00:21:20.694 ] 00:21:20.694 }' 00:21:20.694 10:35:14 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:20.694 10:35:14 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:20.694 10:35:14 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:20.694 10:35:14 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:20.694 10:35:14 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:20.953 [2024-07-12 10:35:14.772309] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:20.953 [2024-07-12 10:35:14.772364] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:20.953 10:35:14 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:21:20.953 [2024-07-12 10:35:14.805744] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:21:20.953 [2024-07-12 10:35:14.807648] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:21.211 [2024-07-12 10:35:14.927496] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:21.211 [2024-07-12 10:35:14.927894] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:21.469 [2024-07-12 10:35:15.146896] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:21.469 [2024-07-12 10:35:15.147043] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 
offset_begin: 0 offset_end: 6144 00:21:21.728 [2024-07-12 10:35:15.590967] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:21.728 [2024-07-12 10:35:15.591195] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:21.986 10:35:15 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:21.986 10:35:15 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:21.986 10:35:15 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:21.986 10:35:15 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:21.986 10:35:15 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:21.986 10:35:15 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:21.986 10:35:15 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:22.244 10:35:16 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:22.244 "name": "raid_bdev1", 00:21:22.244 "uuid": "d504bae7-27ae-457c-bc90-707c2ca74fba", 00:21:22.244 "strip_size_kb": 0, 00:21:22.244 "state": "online", 00:21:22.244 "raid_level": "raid1", 00:21:22.244 "superblock": false, 00:21:22.244 "num_base_bdevs": 2, 00:21:22.244 "num_base_bdevs_discovered": 2, 00:21:22.244 "num_base_bdevs_operational": 2, 00:21:22.244 "process": { 00:21:22.244 "type": "rebuild", 00:21:22.244 "target": "spare", 00:21:22.244 "progress": { 00:21:22.244 "blocks": 16384, 00:21:22.244 "percent": 25 00:21:22.244 } 00:21:22.244 }, 00:21:22.244 "base_bdevs_list": [ 00:21:22.244 { 00:21:22.244 "name": "spare", 00:21:22.244 "uuid": "b169fe86-8a66-5582-b2b6-caeb37d54f6e", 00:21:22.244 "is_configured": true, 00:21:22.244 "data_offset": 0, 00:21:22.244 "data_size": 65536 00:21:22.244 }, 00:21:22.244 { 00:21:22.244 "name": "BaseBdev2", 00:21:22.244 "uuid": "4f593a5d-9653-4e7d-ae7c-537761366cd5", 00:21:22.244 "is_configured": true, 00:21:22.244 "data_offset": 0, 00:21:22.244 "data_size": 65536 00:21:22.244 } 00:21:22.244 ] 00:21:22.244 }' 00:21:22.244 10:35:16 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:22.244 10:35:16 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:22.244 10:35:16 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:22.244 10:35:16 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:22.244 10:35:16 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:21:22.244 10:35:16 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:21:22.244 10:35:16 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:21:22.244 10:35:16 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:21:22.244 10:35:16 -- bdev/bdev_raid.sh@657 -- # local timeout=429 00:21:22.244 10:35:16 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:22.244 10:35:16 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:22.244 10:35:16 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:22.244 10:35:16 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:22.244 10:35:16 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:22.244 10:35:16 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:22.244 10:35:16 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:22.244 10:35:16 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:21:22.503 [2024-07-12 10:35:16.161777] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:21:22.503 [2024-07-12 10:35:16.162194] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:21:22.503 10:35:16 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:22.503 "name": "raid_bdev1", 00:21:22.503 "uuid": "d504bae7-27ae-457c-bc90-707c2ca74fba", 00:21:22.503 "strip_size_kb": 0, 00:21:22.503 "state": "online", 00:21:22.503 "raid_level": "raid1", 00:21:22.503 "superblock": false, 00:21:22.503 "num_base_bdevs": 2, 00:21:22.503 "num_base_bdevs_discovered": 2, 00:21:22.503 "num_base_bdevs_operational": 2, 00:21:22.503 "process": { 00:21:22.503 "type": "rebuild", 00:21:22.503 "target": "spare", 00:21:22.503 "progress": { 00:21:22.503 "blocks": 20480, 00:21:22.503 "percent": 31 00:21:22.503 } 00:21:22.503 }, 00:21:22.503 "base_bdevs_list": [ 00:21:22.503 { 00:21:22.503 "name": "spare", 00:21:22.503 "uuid": "b169fe86-8a66-5582-b2b6-caeb37d54f6e", 00:21:22.503 "is_configured": true, 00:21:22.503 "data_offset": 0, 00:21:22.503 "data_size": 65536 00:21:22.503 }, 00:21:22.503 { 00:21:22.503 "name": "BaseBdev2", 00:21:22.503 "uuid": "4f593a5d-9653-4e7d-ae7c-537761366cd5", 00:21:22.503 "is_configured": true, 00:21:22.503 "data_offset": 0, 00:21:22.503 "data_size": 65536 00:21:22.504 } 00:21:22.504 ] 00:21:22.504 }' 00:21:22.504 10:35:16 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:22.504 [2024-07-12 10:35:16.365143] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:21:22.504 [2024-07-12 10:35:16.365423] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:21:22.504 10:35:16 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:22.504 10:35:16 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:22.762 10:35:16 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:22.762 10:35:16 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:22.762 [2024-07-12 10:35:16.602233] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:21:22.762 [2024-07-12 10:35:16.602647] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:21:23.020 [2024-07-12 10:35:16.811396] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:21:23.277 [2024-07-12 10:35:17.032033] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:21:23.844 10:35:17 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:23.844 10:35:17 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:23.844 10:35:17 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:23.844 10:35:17 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:23.844 10:35:17 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:23.844 10:35:17 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:23.844 10:35:17 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:23.844 10:35:17 -- bdev/bdev_raid.sh@188 
-- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:23.844 [2024-07-12 10:35:17.472730] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:21:23.844 10:35:17 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:23.844 "name": "raid_bdev1", 00:21:23.844 "uuid": "d504bae7-27ae-457c-bc90-707c2ca74fba", 00:21:23.844 "strip_size_kb": 0, 00:21:23.844 "state": "online", 00:21:23.844 "raid_level": "raid1", 00:21:23.844 "superblock": false, 00:21:23.844 "num_base_bdevs": 2, 00:21:23.844 "num_base_bdevs_discovered": 2, 00:21:23.844 "num_base_bdevs_operational": 2, 00:21:23.844 "process": { 00:21:23.844 "type": "rebuild", 00:21:23.844 "target": "spare", 00:21:23.844 "progress": { 00:21:23.844 "blocks": 38912, 00:21:23.844 "percent": 59 00:21:23.844 } 00:21:23.844 }, 00:21:23.844 "base_bdevs_list": [ 00:21:23.844 { 00:21:23.844 "name": "spare", 00:21:23.844 "uuid": "b169fe86-8a66-5582-b2b6-caeb37d54f6e", 00:21:23.844 "is_configured": true, 00:21:23.844 "data_offset": 0, 00:21:23.844 "data_size": 65536 00:21:23.844 }, 00:21:23.844 { 00:21:23.844 "name": "BaseBdev2", 00:21:23.844 "uuid": "4f593a5d-9653-4e7d-ae7c-537761366cd5", 00:21:23.844 "is_configured": true, 00:21:23.844 "data_offset": 0, 00:21:23.844 "data_size": 65536 00:21:23.844 } 00:21:23.844 ] 00:21:23.844 }' 00:21:23.844 10:35:17 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:23.844 10:35:17 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:23.844 10:35:17 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:24.102 10:35:17 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:24.102 10:35:17 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:24.102 [2024-07-12 10:35:17.909584] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:21:24.668 [2024-07-12 10:35:18.564737] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:21:24.668 [2024-07-12 10:35:18.565142] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:21:24.927 10:35:18 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:24.927 10:35:18 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:24.927 10:35:18 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:24.927 10:35:18 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:24.927 10:35:18 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:24.927 10:35:18 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:24.927 10:35:18 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:24.927 10:35:18 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:24.927 [2024-07-12 10:35:18.779679] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:21:25.186 10:35:19 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:25.186 "name": "raid_bdev1", 00:21:25.186 "uuid": "d504bae7-27ae-457c-bc90-707c2ca74fba", 00:21:25.186 "strip_size_kb": 0, 00:21:25.186 "state": "online", 00:21:25.186 "raid_level": "raid1", 00:21:25.186 "superblock": false, 00:21:25.186 "num_base_bdevs": 2, 00:21:25.186 "num_base_bdevs_discovered": 2, 00:21:25.186 "num_base_bdevs_operational": 
2, 00:21:25.186 "process": { 00:21:25.186 "type": "rebuild", 00:21:25.186 "target": "spare", 00:21:25.186 "progress": { 00:21:25.186 "blocks": 61440, 00:21:25.186 "percent": 93 00:21:25.186 } 00:21:25.186 }, 00:21:25.186 "base_bdevs_list": [ 00:21:25.186 { 00:21:25.186 "name": "spare", 00:21:25.186 "uuid": "b169fe86-8a66-5582-b2b6-caeb37d54f6e", 00:21:25.186 "is_configured": true, 00:21:25.186 "data_offset": 0, 00:21:25.186 "data_size": 65536 00:21:25.186 }, 00:21:25.186 { 00:21:25.186 "name": "BaseBdev2", 00:21:25.186 "uuid": "4f593a5d-9653-4e7d-ae7c-537761366cd5", 00:21:25.186 "is_configured": true, 00:21:25.186 "data_offset": 0, 00:21:25.186 "data_size": 65536 00:21:25.186 } 00:21:25.186 ] 00:21:25.186 }' 00:21:25.186 10:35:19 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:25.186 10:35:19 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:25.186 10:35:19 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:25.445 [2024-07-12 10:35:19.110317] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:25.445 10:35:19 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:25.445 10:35:19 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:25.445 [2024-07-12 10:35:19.216044] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:25.445 [2024-07-12 10:35:19.217554] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:26.380 10:35:20 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:26.380 10:35:20 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:26.380 10:35:20 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:26.380 10:35:20 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:26.380 10:35:20 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:26.380 10:35:20 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:26.380 10:35:20 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:26.380 10:35:20 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:26.637 10:35:20 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:26.637 "name": "raid_bdev1", 00:21:26.637 "uuid": "d504bae7-27ae-457c-bc90-707c2ca74fba", 00:21:26.637 "strip_size_kb": 0, 00:21:26.637 "state": "online", 00:21:26.637 "raid_level": "raid1", 00:21:26.637 "superblock": false, 00:21:26.637 "num_base_bdevs": 2, 00:21:26.637 "num_base_bdevs_discovered": 2, 00:21:26.637 "num_base_bdevs_operational": 2, 00:21:26.637 "base_bdevs_list": [ 00:21:26.637 { 00:21:26.637 "name": "spare", 00:21:26.637 "uuid": "b169fe86-8a66-5582-b2b6-caeb37d54f6e", 00:21:26.637 "is_configured": true, 00:21:26.637 "data_offset": 0, 00:21:26.637 "data_size": 65536 00:21:26.637 }, 00:21:26.637 { 00:21:26.637 "name": "BaseBdev2", 00:21:26.637 "uuid": "4f593a5d-9653-4e7d-ae7c-537761366cd5", 00:21:26.637 "is_configured": true, 00:21:26.637 "data_offset": 0, 00:21:26.637 "data_size": 65536 00:21:26.637 } 00:21:26.637 ] 00:21:26.637 }' 00:21:26.637 10:35:20 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:26.637 10:35:20 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:26.637 10:35:20 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:26.637 10:35:20 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:21:26.637 10:35:20 -- bdev/bdev_raid.sh@660 -- # break 00:21:26.637 
10:35:20 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:26.637 10:35:20 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:26.637 10:35:20 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:26.637 10:35:20 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:26.637 10:35:20 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:26.637 10:35:20 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:26.637 10:35:20 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:26.894 10:35:20 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:26.894 "name": "raid_bdev1", 00:21:26.894 "uuid": "d504bae7-27ae-457c-bc90-707c2ca74fba", 00:21:26.894 "strip_size_kb": 0, 00:21:26.894 "state": "online", 00:21:26.894 "raid_level": "raid1", 00:21:26.894 "superblock": false, 00:21:26.895 "num_base_bdevs": 2, 00:21:26.895 "num_base_bdevs_discovered": 2, 00:21:26.895 "num_base_bdevs_operational": 2, 00:21:26.895 "base_bdevs_list": [ 00:21:26.895 { 00:21:26.895 "name": "spare", 00:21:26.895 "uuid": "b169fe86-8a66-5582-b2b6-caeb37d54f6e", 00:21:26.895 "is_configured": true, 00:21:26.895 "data_offset": 0, 00:21:26.895 "data_size": 65536 00:21:26.895 }, 00:21:26.895 { 00:21:26.895 "name": "BaseBdev2", 00:21:26.895 "uuid": "4f593a5d-9653-4e7d-ae7c-537761366cd5", 00:21:26.895 "is_configured": true, 00:21:26.895 "data_offset": 0, 00:21:26.895 "data_size": 65536 00:21:26.895 } 00:21:26.895 ] 00:21:26.895 }' 00:21:26.895 10:35:20 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:27.152 10:35:20 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:27.152 10:35:20 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:27.152 10:35:20 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:27.152 10:35:20 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:27.152 10:35:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:27.152 10:35:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:27.152 10:35:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:27.152 10:35:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:27.152 10:35:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:27.152 10:35:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:27.152 10:35:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:27.152 10:35:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:27.152 10:35:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:27.152 10:35:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:27.152 10:35:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:27.410 10:35:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:27.410 "name": "raid_bdev1", 00:21:27.410 "uuid": "d504bae7-27ae-457c-bc90-707c2ca74fba", 00:21:27.410 "strip_size_kb": 0, 00:21:27.410 "state": "online", 00:21:27.410 "raid_level": "raid1", 00:21:27.410 "superblock": false, 00:21:27.410 "num_base_bdevs": 2, 00:21:27.410 "num_base_bdevs_discovered": 2, 00:21:27.410 "num_base_bdevs_operational": 2, 00:21:27.410 "base_bdevs_list": [ 00:21:27.410 { 00:21:27.410 "name": "spare", 00:21:27.410 "uuid": "b169fe86-8a66-5582-b2b6-caeb37d54f6e", 00:21:27.410 "is_configured": true, 00:21:27.410 "data_offset": 0, 
00:21:27.410 "data_size": 65536 00:21:27.410 }, 00:21:27.410 { 00:21:27.410 "name": "BaseBdev2", 00:21:27.410 "uuid": "4f593a5d-9653-4e7d-ae7c-537761366cd5", 00:21:27.410 "is_configured": true, 00:21:27.410 "data_offset": 0, 00:21:27.410 "data_size": 65536 00:21:27.410 } 00:21:27.410 ] 00:21:27.410 }' 00:21:27.410 10:35:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:27.410 10:35:21 -- common/autotest_common.sh@10 -- # set +x 00:21:27.978 10:35:21 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:28.237 [2024-07-12 10:35:21.973263] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:28.237 [2024-07-12 10:35:21.973304] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:28.237 00:21:28.237 Latency(us) 00:21:28.237 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:28.237 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:21:28.237 raid_bdev1 : 11.71 110.54 331.63 0.00 0.00 12733.74 303.48 113436.86 00:21:28.237 =================================================================================================================== 00:21:28.237 Total : 110.54 331.63 0.00 0.00 12733.74 303.48 113436.86 00:21:28.237 [2024-07-12 10:35:22.044076] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:28.237 [2024-07-12 10:35:22.044119] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:28.237 [2024-07-12 10:35:22.044194] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:28.237 [2024-07-12 10:35:22.044208] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state offline 00:21:28.237 0 00:21:28.237 10:35:22 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:28.237 10:35:22 -- bdev/bdev_raid.sh@671 -- # jq length 00:21:28.496 10:35:22 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:21:28.496 10:35:22 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:21:28.496 10:35:22 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:21:28.496 10:35:22 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:28.496 10:35:22 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:21:28.496 10:35:22 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:28.496 10:35:22 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:21:28.496 10:35:22 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:28.496 10:35:22 -- bdev/nbd_common.sh@12 -- # local i 00:21:28.496 10:35:22 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:28.496 10:35:22 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:28.496 10:35:22 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:21:28.755 /dev/nbd0 00:21:28.755 10:35:22 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:28.755 10:35:22 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:28.755 10:35:22 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:21:28.755 10:35:22 -- common/autotest_common.sh@857 -- # local i 00:21:28.755 10:35:22 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:28.755 10:35:22 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:28.755 10:35:22 -- 
common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:21:28.755 10:35:22 -- common/autotest_common.sh@861 -- # break 00:21:28.755 10:35:22 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:28.755 10:35:22 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:28.755 10:35:22 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:28.755 1+0 records in 00:21:28.755 1+0 records out 00:21:28.755 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000346037 s, 11.8 MB/s 00:21:28.755 10:35:22 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:28.755 10:35:22 -- common/autotest_common.sh@874 -- # size=4096 00:21:28.755 10:35:22 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:28.755 10:35:22 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:28.755 10:35:22 -- common/autotest_common.sh@877 -- # return 0 00:21:28.755 10:35:22 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:28.755 10:35:22 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:28.755 10:35:22 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:21:28.755 10:35:22 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev2 ']' 00:21:28.755 10:35:22 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:21:28.755 10:35:22 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:28.755 10:35:22 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:21:28.755 10:35:22 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:28.755 10:35:22 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:21:28.755 10:35:22 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:28.755 10:35:22 -- bdev/nbd_common.sh@12 -- # local i 00:21:28.755 10:35:22 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:28.755 10:35:22 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:28.755 10:35:22 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:21:29.013 /dev/nbd1 00:21:29.013 10:35:22 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:29.013 10:35:22 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:29.013 10:35:22 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:21:29.013 10:35:22 -- common/autotest_common.sh@857 -- # local i 00:21:29.013 10:35:22 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:29.013 10:35:22 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:29.013 10:35:22 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:21:29.013 10:35:22 -- common/autotest_common.sh@861 -- # break 00:21:29.013 10:35:22 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:29.013 10:35:22 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:29.013 10:35:22 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:29.013 1+0 records in 00:21:29.013 1+0 records out 00:21:29.013 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000879917 s, 4.7 MB/s 00:21:29.013 10:35:22 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:29.013 10:35:22 -- common/autotest_common.sh@874 -- # size=4096 00:21:29.013 10:35:22 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:29.013 10:35:22 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 
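The waitfornbd helper traced above polls /proc/partitions until the exported device shows up, then issues one 4 KiB O_DIRECT read through dd to prove the device actually services I/O rather than merely existing as a node. A minimal standalone sketch of the same probe, assuming only that the nbd device has already been exported; the /tmp scratch path is this sketch's own choice, not the test's:

    waitfornbd_sketch() {
        local nbd_name=$1 i size tmp=/tmp/nbdprobe
        for ((i = 1; i <= 20; i++)); do
            # -w matches the whole word, so nbd1 does not also match nbd10
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # a single direct read bypasses the page cache, so success means
        # the block device really answered the request
        dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct || return 1
        size=$(stat -c %s "$tmp")
        rm -f "$tmp"
        [ "$size" != 0 ]
    }

The trace mirrors exactly this shape: the grep loop, the dd with iflag=direct, and the stat-then-compare on the copied size.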
00:21:29.013 10:35:22 -- common/autotest_common.sh@877 -- # return 0 00:21:29.013 10:35:22 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:29.013 10:35:22 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:29.013 10:35:22 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:21:29.272 10:35:22 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:21:29.272 10:35:22 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:29.272 10:35:22 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:21:29.272 10:35:22 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:29.272 10:35:22 -- bdev/nbd_common.sh@51 -- # local i 00:21:29.272 10:35:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:29.272 10:35:22 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:29.530 10:35:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:29.530 10:35:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:29.530 10:35:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:29.530 10:35:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:29.530 10:35:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:29.530 10:35:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:29.530 10:35:23 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:21:29.530 10:35:23 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:21:29.530 10:35:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:29.530 10:35:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:29.530 10:35:23 -- bdev/nbd_common.sh@41 -- # break 00:21:29.530 10:35:23 -- bdev/nbd_common.sh@45 -- # return 0 00:21:29.530 10:35:23 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:21:29.530 10:35:23 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:29.530 10:35:23 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:21:29.530 10:35:23 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:29.530 10:35:23 -- bdev/nbd_common.sh@51 -- # local i 00:21:29.530 10:35:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:29.530 10:35:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:29.788 10:35:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:29.788 10:35:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:29.788 10:35:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:29.788 10:35:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:29.788 10:35:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:29.788 10:35:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:29.788 10:35:23 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:21:29.788 10:35:23 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:21:29.788 10:35:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:29.788 10:35:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:29.788 10:35:23 -- bdev/nbd_common.sh@41 -- # break 00:21:29.788 10:35:23 -- bdev/nbd_common.sh@45 -- # return 0 00:21:29.788 10:35:23 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:21:29.788 10:35:23 -- bdev/bdev_raid.sh@709 -- # killprocess 126975 00:21:29.788 10:35:23 -- common/autotest_common.sh@926 -- # '[' -z 126975 ']' 00:21:29.788 10:35:23 -- common/autotest_common.sh@930 -- # kill -0 126975 00:21:29.788 10:35:23 -- common/autotest_common.sh@931 -- # uname 00:21:29.788 10:35:23 -- 
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:29.788 10:35:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 126975 00:21:29.788 killing process with pid 126975 00:21:29.788 Received shutdown signal, test time was about 13.365329 seconds 00:21:29.788 00:21:29.788 Latency(us) 00:21:29.788 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:29.788 =================================================================================================================== 00:21:29.788 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:29.788 10:35:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:29.788 10:35:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:29.788 10:35:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 126975' 00:21:29.788 10:35:23 -- common/autotest_common.sh@945 -- # kill 126975 00:21:29.788 10:35:23 -- common/autotest_common.sh@950 -- # wait 126975 00:21:29.788 [2024-07-12 10:35:23.688829] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:30.045 [2024-07-12 10:35:23.838703] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:30.979 ************************************ 00:21:30.979 END TEST raid_rebuild_test_io 00:21:30.979 ************************************ 00:21:30.979 10:35:24 -- bdev/bdev_raid.sh@711 -- # return 0 00:21:30.979 00:21:30.979 real 0m18.262s 00:21:30.979 user 0m28.014s 00:21:30.979 sys 0m1.802s 00:21:30.979 10:35:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:30.979 10:35:24 -- common/autotest_common.sh@10 -- # set +x 00:21:30.979 10:35:24 -- bdev/bdev_raid.sh@738 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true 00:21:30.979 10:35:24 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:21:30.979 10:35:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:30.979 10:35:24 -- common/autotest_common.sh@10 -- # set +x 00:21:30.979 ************************************ 00:21:30.979 START TEST raid_rebuild_test_sb_io 00:21:30.979 ************************************ 00:21:30.979 10:35:24 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 2 true true 00:21:30.979 10:35:24 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:21:30.979 10:35:24 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:21:30.979 10:35:24 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:21:30.979 10:35:24 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:21:30.979 10:35:24 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:21:30.979 10:35:24 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:21:30.979 10:35:24 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:30.979 10:35:24 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:21:30.979 10:35:24 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:30.979 10:35:24 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:30.979 10:35:24 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:21:30.979 10:35:24 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:30.979 10:35:24 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:30.979 10:35:24 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:21:30.979 10:35:24 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:21:30.979 10:35:24 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:21:30.979 10:35:24 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:21:30.979 10:35:24 -- bdev/bdev_raid.sh@525 
-- # local raid_bdev_size 00:21:30.979 10:35:24 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:21:30.979 10:35:24 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:21:30.979 10:35:24 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:21:30.979 10:35:24 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:21:30.979 10:35:24 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:21:30.979 10:35:24 -- bdev/bdev_raid.sh@544 -- # raid_pid=127514 00:21:30.979 10:35:24 -- bdev/bdev_raid.sh@545 -- # waitforlisten 127514 /var/tmp/spdk-raid.sock 00:21:30.979 10:35:24 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:30.979 10:35:24 -- common/autotest_common.sh@819 -- # '[' -z 127514 ']' 00:21:30.979 10:35:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:30.979 10:35:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:30.979 10:35:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:30.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:30.979 10:35:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:30.979 10:35:24 -- common/autotest_common.sh@10 -- # set +x 00:21:31.238 [2024-07-12 10:35:24.924544] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:31.238 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:31.238 Zero copy mechanism will not be used. 00:21:31.238 [2024-07-12 10:35:24.924758] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127514 ] 00:21:31.238 [2024-07-12 10:35:25.091387] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:31.497 [2024-07-12 10:35:25.247723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:31.497 [2024-07-12 10:35:25.411784] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:32.065 10:35:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:32.065 10:35:25 -- common/autotest_common.sh@852 -- # return 0 00:21:32.065 10:35:25 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:32.065 10:35:25 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:32.065 10:35:25 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:32.323 BaseBdev1_malloc 00:21:32.323 10:35:26 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:32.581 [2024-07-12 10:35:26.250553] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:32.581 [2024-07-12 10:35:26.250651] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:32.581 [2024-07-12 10:35:26.250685] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:21:32.581 [2024-07-12 10:35:26.250729] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:32.581 [2024-07-12 10:35:26.252960] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
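Each base device in this test is a passthru vbdev stacked on a 32 MiB malloc bdev with 512-byte blocks, and the raid1 array is created on top with -s so a superblock is written. A condensed sketch of the stack the surrounding trace assembles, assuming bdevperf is already listening on the RPC socket; the rpc wrapper function is this sketch's shorthand, not a test helper:

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

    # 32 MiB malloc backing store, 512-byte blocks, wrapped in a passthru
    # vbdev so the test can later detach and re-attach a named device
    # without disturbing the data underneath
    rpc bdev_malloc_create 32 512 -b BaseBdev1_malloc
    rpc bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
    rpc bdev_malloc_create 32 512 -b BaseBdev2_malloc
    rpc bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2

    # -s writes a superblock to each base bdev; -r raid1 mirrors the pair
    rpc bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1

The superblock is also why the JSON dumps in this test report a data_offset of 2048 blocks (1 MiB at 512-byte blocks): that region of each base bdev is reserved for raid metadata, leaving 63488 of the 65536 blocks for data, where the earlier non-superblock run reported offset 0 and the full 65536.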
00:21:32.581 [2024-07-12 10:35:26.253007] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:32.581 BaseBdev1 00:21:32.581 10:35:26 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:32.581 10:35:26 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:32.581 10:35:26 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:32.581 BaseBdev2_malloc 00:21:32.581 10:35:26 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:32.840 [2024-07-12 10:35:26.653487] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:32.840 [2024-07-12 10:35:26.653568] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:32.840 [2024-07-12 10:35:26.653609] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:21:32.840 [2024-07-12 10:35:26.653658] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:32.840 [2024-07-12 10:35:26.655981] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:32.840 [2024-07-12 10:35:26.656047] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:32.840 BaseBdev2 00:21:32.840 10:35:26 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:21:33.098 spare_malloc 00:21:33.098 10:35:26 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:33.357 spare_delay 00:21:33.357 10:35:27 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:33.616 [2024-07-12 10:35:27.278724] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:33.616 [2024-07-12 10:35:27.278921] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:33.616 [2024-07-12 10:35:27.278993] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:33.616 [2024-07-12 10:35:27.279123] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:33.616 [2024-07-12 10:35:27.281116] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:33.616 [2024-07-12 10:35:27.281275] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:33.616 spare 00:21:33.616 10:35:27 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:21:33.616 [2024-07-12 10:35:27.462833] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:33.616 [2024-07-12 10:35:27.464774] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:33.616 [2024-07-12 10:35:27.465085] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:21:33.616 [2024-07-12 10:35:27.465202] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:33.616 [2024-07-12 10:35:27.465351] bdev_raid.c: 232:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:21:33.616 [2024-07-12 10:35:27.465883] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:21:33.616 [2024-07-12 10:35:27.466053] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:21:33.616 [2024-07-12 10:35:27.466284] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:33.616 10:35:27 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:33.616 10:35:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:33.616 10:35:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:33.616 10:35:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:33.616 10:35:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:33.616 10:35:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:33.616 10:35:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:33.616 10:35:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:33.616 10:35:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:33.616 10:35:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:33.616 10:35:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:33.616 10:35:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:33.875 10:35:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:33.875 "name": "raid_bdev1", 00:21:33.875 "uuid": "1fa76749-2d25-43bf-9022-ad20b4de2234", 00:21:33.875 "strip_size_kb": 0, 00:21:33.875 "state": "online", 00:21:33.875 "raid_level": "raid1", 00:21:33.875 "superblock": true, 00:21:33.875 "num_base_bdevs": 2, 00:21:33.875 "num_base_bdevs_discovered": 2, 00:21:33.875 "num_base_bdevs_operational": 2, 00:21:33.875 "base_bdevs_list": [ 00:21:33.875 { 00:21:33.875 "name": "BaseBdev1", 00:21:33.875 "uuid": "801db335-e7e8-5c53-9181-abcf573d3f31", 00:21:33.875 "is_configured": true, 00:21:33.875 "data_offset": 2048, 00:21:33.875 "data_size": 63488 00:21:33.875 }, 00:21:33.875 { 00:21:33.875 "name": "BaseBdev2", 00:21:33.875 "uuid": "505cd83d-9e9e-5c70-880f-02bbc26d12b9", 00:21:33.875 "is_configured": true, 00:21:33.875 "data_offset": 2048, 00:21:33.875 "data_size": 63488 00:21:33.875 } 00:21:33.875 ] 00:21:33.875 }' 00:21:33.875 10:35:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:33.875 10:35:27 -- common/autotest_common.sh@10 -- # set +x 00:21:34.444 10:35:28 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:34.444 10:35:28 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:21:34.703 [2024-07-12 10:35:28.423126] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:34.703 10:35:28 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:21:34.703 10:35:28 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:34.703 10:35:28 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:34.962 10:35:28 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:21:34.962 10:35:28 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:21:34.962 10:35:28 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:21:34.962 10:35:28 -- 
bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:21:34.962 [2024-07-12 10:35:28.765523] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:21:34.962 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:34.962 Zero copy mechanism will not be used. 00:21:34.962 Running I/O for 60 seconds... 00:21:34.962 [2024-07-12 10:35:28.839102] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:34.962 [2024-07-12 10:35:28.851011] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005930 00:21:34.962 10:35:28 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:34.962 10:35:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:34.962 10:35:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:34.962 10:35:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:34.962 10:35:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:34.962 10:35:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:21:34.962 10:35:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:34.962 10:35:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:34.962 10:35:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:34.962 10:35:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:34.962 10:35:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:34.962 10:35:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:35.222 10:35:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:35.222 "name": "raid_bdev1", 00:21:35.222 "uuid": "1fa76749-2d25-43bf-9022-ad20b4de2234", 00:21:35.222 "strip_size_kb": 0, 00:21:35.222 "state": "online", 00:21:35.222 "raid_level": "raid1", 00:21:35.222 "superblock": true, 00:21:35.222 "num_base_bdevs": 2, 00:21:35.222 "num_base_bdevs_discovered": 1, 00:21:35.222 "num_base_bdevs_operational": 1, 00:21:35.222 "base_bdevs_list": [ 00:21:35.222 { 00:21:35.222 "name": null, 00:21:35.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:35.222 "is_configured": false, 00:21:35.222 "data_offset": 2048, 00:21:35.222 "data_size": 63488 00:21:35.222 }, 00:21:35.222 { 00:21:35.222 "name": "BaseBdev2", 00:21:35.222 "uuid": "505cd83d-9e9e-5c70-880f-02bbc26d12b9", 00:21:35.222 "is_configured": true, 00:21:35.222 "data_offset": 2048, 00:21:35.222 "data_size": 63488 00:21:35.222 } 00:21:35.222 ] 00:21:35.222 }' 00:21:35.222 10:35:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:35.222 10:35:29 -- common/autotest_common.sh@10 -- # set +x 00:21:35.793 10:35:29 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:36.051 [2024-07-12 10:35:29.935272] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:36.051 [2024-07-12 10:35:29.935482] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:36.309 10:35:29 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:21:36.309 [2024-07-12 10:35:29.986373] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:21:36.309 [2024-07-12 10:35:29.988462] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:36.309 [2024-07-12 10:35:30.108649] 
bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:36.309 [2024-07-12 10:35:30.109160] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:36.567 [2024-07-12 10:35:30.335966] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:36.567 [2024-07-12 10:35:30.336315] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:36.826 [2024-07-12 10:35:30.584810] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:36.826 [2024-07-12 10:35:30.585389] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:37.085 [2024-07-12 10:35:30.818417] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:37.085 [2024-07-12 10:35:30.818759] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:37.085 10:35:30 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:37.085 10:35:30 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:37.085 10:35:30 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:37.085 10:35:30 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:37.085 10:35:30 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:37.085 10:35:30 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:37.085 10:35:30 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:37.343 [2024-07-12 10:35:31.039417] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:37.343 10:35:31 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:37.343 "name": "raid_bdev1", 00:21:37.343 "uuid": "1fa76749-2d25-43bf-9022-ad20b4de2234", 00:21:37.343 "strip_size_kb": 0, 00:21:37.343 "state": "online", 00:21:37.343 "raid_level": "raid1", 00:21:37.343 "superblock": true, 00:21:37.343 "num_base_bdevs": 2, 00:21:37.343 "num_base_bdevs_discovered": 2, 00:21:37.343 "num_base_bdevs_operational": 2, 00:21:37.343 "process": { 00:21:37.343 "type": "rebuild", 00:21:37.343 "target": "spare", 00:21:37.343 "progress": { 00:21:37.343 "blocks": 14336, 00:21:37.343 "percent": 22 00:21:37.343 } 00:21:37.343 }, 00:21:37.343 "base_bdevs_list": [ 00:21:37.343 { 00:21:37.343 "name": "spare", 00:21:37.343 "uuid": "7fce876f-6a59-5b00-b152-9e44c56d8aa7", 00:21:37.343 "is_configured": true, 00:21:37.343 "data_offset": 2048, 00:21:37.343 "data_size": 63488 00:21:37.343 }, 00:21:37.343 { 00:21:37.343 "name": "BaseBdev2", 00:21:37.343 "uuid": "505cd83d-9e9e-5c70-880f-02bbc26d12b9", 00:21:37.343 "is_configured": true, 00:21:37.343 "data_offset": 2048, 00:21:37.343 "data_size": 63488 00:21:37.343 } 00:21:37.343 ] 00:21:37.343 }' 00:21:37.343 10:35:31 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:37.343 [2024-07-12 10:35:31.253729] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:37.343 [2024-07-12 10:35:31.254070] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:37.600 10:35:31 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:37.600 10:35:31 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:37.601 10:35:31 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:37.601 10:35:31 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:37.859 [2024-07-12 10:35:31.544324] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:37.859 [2024-07-12 10:35:31.613194] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:21:37.859 [2024-07-12 10:35:31.726298] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:37.859 [2024-07-12 10:35:31.728136] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:37.859 [2024-07-12 10:35:31.753349] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005930 00:21:38.117 10:35:31 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:38.117 10:35:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:38.117 10:35:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:38.117 10:35:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:38.117 10:35:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:38.117 10:35:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:21:38.117 10:35:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:38.117 10:35:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:38.117 10:35:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:38.117 10:35:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:38.117 10:35:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:38.117 10:35:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:38.117 10:35:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:38.117 "name": "raid_bdev1", 00:21:38.117 "uuid": "1fa76749-2d25-43bf-9022-ad20b4de2234", 00:21:38.117 "strip_size_kb": 0, 00:21:38.117 "state": "online", 00:21:38.117 "raid_level": "raid1", 00:21:38.117 "superblock": true, 00:21:38.117 "num_base_bdevs": 2, 00:21:38.117 "num_base_bdevs_discovered": 1, 00:21:38.117 "num_base_bdevs_operational": 1, 00:21:38.117 "base_bdevs_list": [ 00:21:38.117 { 00:21:38.117 "name": null, 00:21:38.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:38.117 "is_configured": false, 00:21:38.117 "data_offset": 2048, 00:21:38.117 "data_size": 63488 00:21:38.117 }, 00:21:38.117 { 00:21:38.117 "name": "BaseBdev2", 00:21:38.117 "uuid": "505cd83d-9e9e-5c70-880f-02bbc26d12b9", 00:21:38.117 "is_configured": true, 00:21:38.117 "data_offset": 2048, 00:21:38.117 "data_size": 63488 00:21:38.117 } 00:21:38.117 ] 00:21:38.117 }' 00:21:38.117 10:35:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:38.117 10:35:32 -- common/autotest_common.sh@10 -- # set +x 00:21:39.052 10:35:32 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:39.052 10:35:32 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:39.052 10:35:32 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:39.052 10:35:32 -- bdev/bdev_raid.sh@185 -- # 
local target=none 00:21:39.052 10:35:32 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:39.052 10:35:32 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:39.052 10:35:32 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:39.052 10:35:32 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:39.052 "name": "raid_bdev1", 00:21:39.052 "uuid": "1fa76749-2d25-43bf-9022-ad20b4de2234", 00:21:39.052 "strip_size_kb": 0, 00:21:39.052 "state": "online", 00:21:39.052 "raid_level": "raid1", 00:21:39.052 "superblock": true, 00:21:39.052 "num_base_bdevs": 2, 00:21:39.052 "num_base_bdevs_discovered": 1, 00:21:39.052 "num_base_bdevs_operational": 1, 00:21:39.052 "base_bdevs_list": [ 00:21:39.052 { 00:21:39.052 "name": null, 00:21:39.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:39.052 "is_configured": false, 00:21:39.052 "data_offset": 2048, 00:21:39.052 "data_size": 63488 00:21:39.052 }, 00:21:39.052 { 00:21:39.052 "name": "BaseBdev2", 00:21:39.052 "uuid": "505cd83d-9e9e-5c70-880f-02bbc26d12b9", 00:21:39.052 "is_configured": true, 00:21:39.052 "data_offset": 2048, 00:21:39.052 "data_size": 63488 00:21:39.052 } 00:21:39.052 ] 00:21:39.052 }' 00:21:39.052 10:35:32 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:39.052 10:35:32 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:39.052 10:35:32 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:39.052 10:35:32 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:39.052 10:35:32 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:39.309 [2024-07-12 10:35:33.192880] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:39.309 [2024-07-12 10:35:33.193075] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:39.309 10:35:33 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:21:39.567 [2024-07-12 10:35:33.234487] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:21:39.567 [2024-07-12 10:35:33.236374] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:39.567 [2024-07-12 10:35:33.344471] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:39.567 [2024-07-12 10:35:33.344910] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:39.567 [2024-07-12 10:35:33.464761] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:39.567 [2024-07-12 10:35:33.470874] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:40.134 [2024-07-12 10:35:33.927616] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:40.391 [2024-07-12 10:35:34.148501] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:40.391 [2024-07-12 10:35:34.148963] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:40.391 10:35:34 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 
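verify_raid_bdev_process, entered above, keys off the optional process object that bdev_raid_get_bdevs reports while a rebuild runs; the jq fallback '// "none"' turns an absent field into a comparable default. A sketch of the same polling idea as a self-contained helper, assuming the test's RPC socket; the retry bound and sleep interval are this sketch's own choices:

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

    # wait until the named raid bdev reports an active rebuild aimed at
    # the expected target bdev
    wait_for_rebuild_sketch() {
        local name=$1 target=$2 i info ptype ptarget
        for ((i = 0; i < 50; i++)); do
            info=$(rpc bdev_raid_get_bdevs all | jq -r --arg n "$name" '.[] | select(.name == $n)')
            # the // "none" default means the comparisons below never see
            # an empty string when no process is running
            ptype=$(jq -r '.process.type // "none"' <<< "$info")
            ptarget=$(jq -r '.process.target // "none"' <<< "$info")
            [[ $ptype == rebuild && $ptarget == "$target" ]] && return 0
            sleep 0.2
        done
        return 1
    }

Progress is visible the same way: .process.progress.blocks and .process.progress.percent in the JSON dumps above count rebuilt blocks against the array size.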
00:21:40.391 10:35:34 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:40.391 10:35:34 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:40.391 10:35:34 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:40.391 10:35:34 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:40.391 10:35:34 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:40.391 10:35:34 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:40.649 [2024-07-12 10:35:34.356728] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:40.649 [2024-07-12 10:35:34.357122] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:40.649 10:35:34 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:40.649 "name": "raid_bdev1", 00:21:40.649 "uuid": "1fa76749-2d25-43bf-9022-ad20b4de2234", 00:21:40.649 "strip_size_kb": 0, 00:21:40.649 "state": "online", 00:21:40.649 "raid_level": "raid1", 00:21:40.649 "superblock": true, 00:21:40.649 "num_base_bdevs": 2, 00:21:40.649 "num_base_bdevs_discovered": 2, 00:21:40.649 "num_base_bdevs_operational": 2, 00:21:40.649 "process": { 00:21:40.649 "type": "rebuild", 00:21:40.649 "target": "spare", 00:21:40.649 "progress": { 00:21:40.649 "blocks": 16384, 00:21:40.649 "percent": 25 00:21:40.649 } 00:21:40.649 }, 00:21:40.649 "base_bdevs_list": [ 00:21:40.649 { 00:21:40.649 "name": "spare", 00:21:40.649 "uuid": "7fce876f-6a59-5b00-b152-9e44c56d8aa7", 00:21:40.649 "is_configured": true, 00:21:40.649 "data_offset": 2048, 00:21:40.649 "data_size": 63488 00:21:40.649 }, 00:21:40.649 { 00:21:40.649 "name": "BaseBdev2", 00:21:40.649 "uuid": "505cd83d-9e9e-5c70-880f-02bbc26d12b9", 00:21:40.649 "is_configured": true, 00:21:40.649 "data_offset": 2048, 00:21:40.649 "data_size": 63488 00:21:40.649 } 00:21:40.649 ] 00:21:40.649 }' 00:21:40.649 10:35:34 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:40.649 10:35:34 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:40.649 10:35:34 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:40.907 [2024-07-12 10:35:34.579968] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:21:40.907 10:35:34 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:40.907 10:35:34 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:21:40.907 10:35:34 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:21:40.907 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:21:40.907 10:35:34 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:21:40.907 10:35:34 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:21:40.907 10:35:34 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:21:40.907 10:35:34 -- bdev/bdev_raid.sh@657 -- # local timeout=447 00:21:40.907 10:35:34 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:40.907 10:35:34 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:40.907 10:35:34 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:40.907 10:35:34 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:40.907 10:35:34 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:40.907 10:35:34 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 
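The "unary operator expected" error at bdev_raid.sh line 617 above is the classic unquoted-empty-expansion bug in test: a variable that is empty at that point vanishes from '[' = false ']', leaving test with = as its first operand. Which variable it is cannot be recovered from the trace; only its empty expansion survives. The failure is benign here only because the non-zero status sends the script down the false path, as the continuation at @642 shows. A minimal reproduction and the two standard fixes:

    var=
    [ $var = false ]     # expands to [ = false ]: "unary operator expected"
    [ "$var" = false ]   # quoting keeps the empty operand in place
    [[ $var = false ]]   # [[ ]] is a bash keyword and never word-splits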
00:21:40.907 10:35:34 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:40.907 10:35:34 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:40.907 [2024-07-12 10:35:34.794474] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:21:41.166 10:35:34 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:41.166 "name": "raid_bdev1", 00:21:41.166 "uuid": "1fa76749-2d25-43bf-9022-ad20b4de2234", 00:21:41.166 "strip_size_kb": 0, 00:21:41.166 "state": "online", 00:21:41.166 "raid_level": "raid1", 00:21:41.166 "superblock": true, 00:21:41.166 "num_base_bdevs": 2, 00:21:41.166 "num_base_bdevs_discovered": 2, 00:21:41.166 "num_base_bdevs_operational": 2, 00:21:41.166 "process": { 00:21:41.166 "type": "rebuild", 00:21:41.166 "target": "spare", 00:21:41.166 "progress": { 00:21:41.166 "blocks": 22528, 00:21:41.166 "percent": 35 00:21:41.166 } 00:21:41.166 }, 00:21:41.166 "base_bdevs_list": [ 00:21:41.166 { 00:21:41.166 "name": "spare", 00:21:41.166 "uuid": "7fce876f-6a59-5b00-b152-9e44c56d8aa7", 00:21:41.166 "is_configured": true, 00:21:41.166 "data_offset": 2048, 00:21:41.166 "data_size": 63488 00:21:41.166 }, 00:21:41.166 { 00:21:41.166 "name": "BaseBdev2", 00:21:41.166 "uuid": "505cd83d-9e9e-5c70-880f-02bbc26d12b9", 00:21:41.166 "is_configured": true, 00:21:41.166 "data_offset": 2048, 00:21:41.166 "data_size": 63488 00:21:41.166 } 00:21:41.166 ] 00:21:41.166 }' 00:21:41.166 10:35:34 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:41.166 10:35:34 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:41.166 10:35:34 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:41.166 10:35:34 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:41.166 10:35:34 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:41.424 [2024-07-12 10:35:35.128432] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:21:41.991 [2024-07-12 10:35:35.901554] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:21:42.249 10:35:35 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:42.249 10:35:35 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:42.249 10:35:35 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:42.249 10:35:35 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:42.249 10:35:35 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:42.249 10:35:35 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:42.249 10:35:35 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:42.249 10:35:35 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:42.249 [2024-07-12 10:35:36.134900] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:21:42.508 10:35:36 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:42.508 "name": "raid_bdev1", 00:21:42.508 "uuid": "1fa76749-2d25-43bf-9022-ad20b4de2234", 00:21:42.508 "strip_size_kb": 0, 00:21:42.508 "state": "online", 00:21:42.508 "raid_level": "raid1", 00:21:42.508 "superblock": true, 00:21:42.508 "num_base_bdevs": 2, 00:21:42.508 "num_base_bdevs_discovered": 2, 
00:21:42.508 "num_base_bdevs_operational": 2, 00:21:42.508 "process": { 00:21:42.508 "type": "rebuild", 00:21:42.508 "target": "spare", 00:21:42.508 "progress": { 00:21:42.508 "blocks": 40960, 00:21:42.508 "percent": 64 00:21:42.508 } 00:21:42.508 }, 00:21:42.508 "base_bdevs_list": [ 00:21:42.508 { 00:21:42.508 "name": "spare", 00:21:42.508 "uuid": "7fce876f-6a59-5b00-b152-9e44c56d8aa7", 00:21:42.508 "is_configured": true, 00:21:42.508 "data_offset": 2048, 00:21:42.508 "data_size": 63488 00:21:42.508 }, 00:21:42.508 { 00:21:42.508 "name": "BaseBdev2", 00:21:42.508 "uuid": "505cd83d-9e9e-5c70-880f-02bbc26d12b9", 00:21:42.508 "is_configured": true, 00:21:42.508 "data_offset": 2048, 00:21:42.508 "data_size": 63488 00:21:42.508 } 00:21:42.508 ] 00:21:42.508 }' 00:21:42.508 10:35:36 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:42.508 10:35:36 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:42.508 10:35:36 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:42.508 10:35:36 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:42.508 10:35:36 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:43.106 [2024-07-12 10:35:36.748414] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:21:43.106 [2024-07-12 10:35:36.849754] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:21:43.365 [2024-07-12 10:35:37.186385] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:21:43.622 10:35:37 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:43.622 10:35:37 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:43.622 10:35:37 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:43.622 10:35:37 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:43.622 10:35:37 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:43.622 10:35:37 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:43.622 10:35:37 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:43.622 10:35:37 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:43.622 [2024-07-12 10:35:37.522332] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:43.880 10:35:37 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:43.880 "name": "raid_bdev1", 00:21:43.880 "uuid": "1fa76749-2d25-43bf-9022-ad20b4de2234", 00:21:43.880 "strip_size_kb": 0, 00:21:43.880 "state": "online", 00:21:43.880 "raid_level": "raid1", 00:21:43.880 "superblock": true, 00:21:43.880 "num_base_bdevs": 2, 00:21:43.880 "num_base_bdevs_discovered": 2, 00:21:43.880 "num_base_bdevs_operational": 2, 00:21:43.880 "process": { 00:21:43.880 "type": "rebuild", 00:21:43.880 "target": "spare", 00:21:43.880 "progress": { 00:21:43.880 "blocks": 63488, 00:21:43.880 "percent": 100 00:21:43.880 } 00:21:43.880 }, 00:21:43.880 "base_bdevs_list": [ 00:21:43.880 { 00:21:43.880 "name": "spare", 00:21:43.880 "uuid": "7fce876f-6a59-5b00-b152-9e44c56d8aa7", 00:21:43.880 "is_configured": true, 00:21:43.880 "data_offset": 2048, 00:21:43.880 "data_size": 63488 00:21:43.880 }, 00:21:43.880 { 00:21:43.880 "name": "BaseBdev2", 00:21:43.880 "uuid": "505cd83d-9e9e-5c70-880f-02bbc26d12b9", 00:21:43.880 "is_configured": true, 
00:21:43.880 "data_offset": 2048, 00:21:43.880 "data_size": 63488 00:21:43.880 } 00:21:43.880 ] 00:21:43.880 }' 00:21:43.880 10:35:37 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:43.880 10:35:37 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:43.880 10:35:37 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:43.880 [2024-07-12 10:35:37.628224] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:43.880 [2024-07-12 10:35:37.630720] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:43.880 10:35:37 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:43.880 10:35:37 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:44.810 10:35:38 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:44.810 10:35:38 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:44.810 10:35:38 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:44.810 10:35:38 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:44.810 10:35:38 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:44.810 10:35:38 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:44.810 10:35:38 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:44.811 10:35:38 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:45.068 10:35:38 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:45.068 "name": "raid_bdev1", 00:21:45.068 "uuid": "1fa76749-2d25-43bf-9022-ad20b4de2234", 00:21:45.068 "strip_size_kb": 0, 00:21:45.068 "state": "online", 00:21:45.068 "raid_level": "raid1", 00:21:45.068 "superblock": true, 00:21:45.068 "num_base_bdevs": 2, 00:21:45.068 "num_base_bdevs_discovered": 2, 00:21:45.068 "num_base_bdevs_operational": 2, 00:21:45.068 "base_bdevs_list": [ 00:21:45.068 { 00:21:45.068 "name": "spare", 00:21:45.068 "uuid": "7fce876f-6a59-5b00-b152-9e44c56d8aa7", 00:21:45.068 "is_configured": true, 00:21:45.068 "data_offset": 2048, 00:21:45.068 "data_size": 63488 00:21:45.068 }, 00:21:45.068 { 00:21:45.068 "name": "BaseBdev2", 00:21:45.068 "uuid": "505cd83d-9e9e-5c70-880f-02bbc26d12b9", 00:21:45.068 "is_configured": true, 00:21:45.068 "data_offset": 2048, 00:21:45.068 "data_size": 63488 00:21:45.068 } 00:21:45.068 ] 00:21:45.068 }' 00:21:45.068 10:35:38 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:45.068 10:35:38 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:45.068 10:35:38 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:45.324 10:35:39 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:21:45.324 10:35:39 -- bdev/bdev_raid.sh@660 -- # break 00:21:45.324 10:35:39 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:45.324 10:35:39 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:45.324 10:35:39 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:45.325 10:35:39 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:45.325 10:35:39 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:45.325 10:35:39 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:45.325 10:35:39 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:45.582 10:35:39 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:45.582 "name": 
"raid_bdev1", 00:21:45.582 "uuid": "1fa76749-2d25-43bf-9022-ad20b4de2234", 00:21:45.582 "strip_size_kb": 0, 00:21:45.582 "state": "online", 00:21:45.582 "raid_level": "raid1", 00:21:45.582 "superblock": true, 00:21:45.582 "num_base_bdevs": 2, 00:21:45.582 "num_base_bdevs_discovered": 2, 00:21:45.582 "num_base_bdevs_operational": 2, 00:21:45.582 "base_bdevs_list": [ 00:21:45.582 { 00:21:45.582 "name": "spare", 00:21:45.582 "uuid": "7fce876f-6a59-5b00-b152-9e44c56d8aa7", 00:21:45.582 "is_configured": true, 00:21:45.582 "data_offset": 2048, 00:21:45.582 "data_size": 63488 00:21:45.582 }, 00:21:45.582 { 00:21:45.582 "name": "BaseBdev2", 00:21:45.582 "uuid": "505cd83d-9e9e-5c70-880f-02bbc26d12b9", 00:21:45.582 "is_configured": true, 00:21:45.582 "data_offset": 2048, 00:21:45.582 "data_size": 63488 00:21:45.582 } 00:21:45.582 ] 00:21:45.582 }' 00:21:45.582 10:35:39 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:45.582 10:35:39 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:45.582 10:35:39 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:45.582 10:35:39 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:45.582 10:35:39 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:45.582 10:35:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:45.582 10:35:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:45.582 10:35:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:45.582 10:35:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:45.582 10:35:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:45.582 10:35:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:45.582 10:35:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:45.582 10:35:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:45.582 10:35:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:45.582 10:35:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:45.582 10:35:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:45.840 10:35:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:45.840 "name": "raid_bdev1", 00:21:45.840 "uuid": "1fa76749-2d25-43bf-9022-ad20b4de2234", 00:21:45.840 "strip_size_kb": 0, 00:21:45.840 "state": "online", 00:21:45.840 "raid_level": "raid1", 00:21:45.840 "superblock": true, 00:21:45.840 "num_base_bdevs": 2, 00:21:45.840 "num_base_bdevs_discovered": 2, 00:21:45.840 "num_base_bdevs_operational": 2, 00:21:45.840 "base_bdevs_list": [ 00:21:45.840 { 00:21:45.840 "name": "spare", 00:21:45.840 "uuid": "7fce876f-6a59-5b00-b152-9e44c56d8aa7", 00:21:45.840 "is_configured": true, 00:21:45.840 "data_offset": 2048, 00:21:45.840 "data_size": 63488 00:21:45.840 }, 00:21:45.840 { 00:21:45.840 "name": "BaseBdev2", 00:21:45.840 "uuid": "505cd83d-9e9e-5c70-880f-02bbc26d12b9", 00:21:45.840 "is_configured": true, 00:21:45.840 "data_offset": 2048, 00:21:45.840 "data_size": 63488 00:21:45.840 } 00:21:45.840 ] 00:21:45.840 }' 00:21:45.840 10:35:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:45.840 10:35:39 -- common/autotest_common.sh@10 -- # set +x 00:21:46.406 10:35:40 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:46.664 [2024-07-12 10:35:40.430751] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 
00:21:46.664 [2024-07-12 10:35:40.430942] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:46.664 00:21:46.664 Latency(us) 00:21:46.664 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:46.664 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:21:46.664 raid_bdev1 : 11.76 112.22 336.65 0.00 0.00 12626.91 305.34 113436.86 00:21:46.664 =================================================================================================================== 00:21:46.664 Total : 112.22 336.65 0.00 0.00 12626.91 305.34 113436.86 00:21:46.664 [2024-07-12 10:35:40.545519] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:46.664 [2024-07-12 10:35:40.545680] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:46.664 [2024-07-12 10:35:40.545801] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:46.664 0 00:21:46.664 [2024-07-12 10:35:40.545915] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:21:46.664 10:35:40 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:46.664 10:35:40 -- bdev/bdev_raid.sh@671 -- # jq length 00:21:46.926 10:35:40 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:21:46.926 10:35:40 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:21:46.926 10:35:40 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:21:46.926 10:35:40 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:46.926 10:35:40 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:21:46.926 10:35:40 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:46.926 10:35:40 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:21:46.926 10:35:40 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:46.926 10:35:40 -- bdev/nbd_common.sh@12 -- # local i 00:21:46.926 10:35:40 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:46.926 10:35:40 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:46.926 10:35:40 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:21:47.185 /dev/nbd0 00:21:47.185 10:35:41 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:47.185 10:35:41 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:47.185 10:35:41 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:21:47.185 10:35:41 -- common/autotest_common.sh@857 -- # local i 00:21:47.185 10:35:41 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:47.185 10:35:41 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:47.185 10:35:41 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:21:47.185 10:35:41 -- common/autotest_common.sh@861 -- # break 00:21:47.185 10:35:41 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:47.185 10:35:41 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:47.185 10:35:41 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:47.185 1+0 records in 00:21:47.185 1+0 records out 00:21:47.185 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000514925 s, 8.0 MB/s 00:21:47.185 10:35:41 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:47.185 10:35:41 -- 
common/autotest_common.sh@874 -- # size=4096 00:21:47.185 10:35:41 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:47.185 10:35:41 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:47.185 10:35:41 -- common/autotest_common.sh@877 -- # return 0 00:21:47.185 10:35:41 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:47.185 10:35:41 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:47.185 10:35:41 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:21:47.185 10:35:41 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev2 ']' 00:21:47.185 10:35:41 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:21:47.185 10:35:41 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:47.185 10:35:41 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:21:47.185 10:35:41 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:47.185 10:35:41 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:21:47.185 10:35:41 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:47.185 10:35:41 -- bdev/nbd_common.sh@12 -- # local i 00:21:47.185 10:35:41 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:47.185 10:35:41 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:47.185 10:35:41 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:21:47.453 /dev/nbd1 00:21:47.453 10:35:41 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:47.453 10:35:41 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:47.453 10:35:41 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:21:47.453 10:35:41 -- common/autotest_common.sh@857 -- # local i 00:21:47.453 10:35:41 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:47.453 10:35:41 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:47.453 10:35:41 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:21:47.453 10:35:41 -- common/autotest_common.sh@861 -- # break 00:21:47.453 10:35:41 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:47.453 10:35:41 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:47.453 10:35:41 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:47.453 1+0 records in 00:21:47.453 1+0 records out 00:21:47.453 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000733901 s, 5.6 MB/s 00:21:47.453 10:35:41 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:47.453 10:35:41 -- common/autotest_common.sh@874 -- # size=4096 00:21:47.453 10:35:41 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:47.453 10:35:41 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:47.453 10:35:41 -- common/autotest_common.sh@877 -- # return 0 00:21:47.453 10:35:41 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:47.453 10:35:41 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:47.453 10:35:41 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:47.752 10:35:41 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:21:47.752 10:35:41 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:47.752 10:35:41 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:21:47.752 10:35:41 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:47.752 10:35:41 -- bdev/nbd_common.sh@51 -- # local i 00:21:47.752 10:35:41 -- bdev/nbd_common.sh@53 
-- # for i in "${nbd_list[@]}" 00:21:47.752 10:35:41 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:48.057 10:35:41 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:48.057 10:35:41 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:48.057 10:35:41 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:48.057 10:35:41 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:48.057 10:35:41 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:48.057 10:35:41 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:48.057 10:35:41 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:21:48.057 10:35:41 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:21:48.057 10:35:41 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:48.057 10:35:41 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:48.057 10:35:41 -- bdev/nbd_common.sh@41 -- # break 00:21:48.057 10:35:41 -- bdev/nbd_common.sh@45 -- # return 0 00:21:48.057 10:35:41 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:21:48.057 10:35:41 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:48.057 10:35:41 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:21:48.057 10:35:41 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:48.057 10:35:41 -- bdev/nbd_common.sh@51 -- # local i 00:21:48.057 10:35:41 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:48.057 10:35:41 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:48.314 10:35:42 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:48.314 10:35:42 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:48.314 10:35:42 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:48.314 10:35:42 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:48.314 10:35:42 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:48.314 10:35:42 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:48.314 10:35:42 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:21:48.314 10:35:42 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:21:48.314 10:35:42 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:48.315 10:35:42 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:48.315 10:35:42 -- bdev/nbd_common.sh@41 -- # break 00:21:48.315 10:35:42 -- bdev/nbd_common.sh@45 -- # return 0 00:21:48.315 10:35:42 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:21:48.315 10:35:42 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:48.315 10:35:42 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:21:48.315 10:35:42 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:21:48.572 10:35:42 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:48.831 [2024-07-12 10:35:42.648992] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:48.831 [2024-07-12 10:35:42.649069] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:48.831 [2024-07-12 10:35:42.649105] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:21:48.831 [2024-07-12 10:35:42.649132] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:48.831 [2024-07-12 10:35:42.651076] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:48.831 [2024-07-12 10:35:42.651142] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:48.831 [2024-07-12 10:35:42.651244] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:48.831 [2024-07-12 10:35:42.651300] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:48.831 BaseBdev1 00:21:48.831 10:35:42 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:48.831 10:35:42 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:21:48.831 10:35:42 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:21:49.088 10:35:42 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:49.364 [2024-07-12 10:35:43.089122] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:49.364 [2024-07-12 10:35:43.089179] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:49.364 [2024-07-12 10:35:43.089208] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:21:49.364 [2024-07-12 10:35:43.089232] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:49.364 [2024-07-12 10:35:43.089573] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:49.364 [2024-07-12 10:35:43.089630] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:49.364 [2024-07-12 10:35:43.089715] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:21:49.364 [2024-07-12 10:35:43.089729] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:21:49.364 [2024-07-12 10:35:43.089735] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:49.364 [2024-07-12 10:35:43.089750] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state configuring 00:21:49.364 [2024-07-12 10:35:43.089805] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:49.364 BaseBdev2 00:21:49.364 10:35:43 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:21:49.622 10:35:43 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:49.622 [2024-07-12 10:35:43.452811] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:49.622 [2024-07-12 10:35:43.452871] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:49.622 [2024-07-12 10:35:43.452906] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:21:49.622 [2024-07-12 10:35:43.452925] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:49.622 [2024-07-12 10:35:43.453315] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:49.622 [2024-07-12 10:35:43.453369] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:49.622 [2024-07-12 10:35:43.453467] 
bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:21:49.622 [2024-07-12 10:35:43.453493] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:49.622 spare 00:21:49.622 10:35:43 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:49.622 10:35:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:49.622 10:35:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:49.622 10:35:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:49.622 10:35:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:49.622 10:35:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:49.622 10:35:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:49.622 10:35:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:49.622 10:35:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:49.622 10:35:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:49.622 10:35:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:49.622 10:35:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:49.880 [2024-07-12 10:35:43.553582] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ae80 00:21:49.880 [2024-07-12 10:35:43.553603] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:49.880 [2024-07-12 10:35:43.553694] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002c930 00:21:49.880 [2024-07-12 10:35:43.554045] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ae80 00:21:49.880 [2024-07-12 10:35:43.554067] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ae80 00:21:49.880 [2024-07-12 10:35:43.554244] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:49.880 10:35:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:49.880 "name": "raid_bdev1", 00:21:49.880 "uuid": "1fa76749-2d25-43bf-9022-ad20b4de2234", 00:21:49.880 "strip_size_kb": 0, 00:21:49.880 "state": "online", 00:21:49.880 "raid_level": "raid1", 00:21:49.880 "superblock": true, 00:21:49.880 "num_base_bdevs": 2, 00:21:49.880 "num_base_bdevs_discovered": 2, 00:21:49.880 "num_base_bdevs_operational": 2, 00:21:49.880 "base_bdevs_list": [ 00:21:49.880 { 00:21:49.880 "name": "spare", 00:21:49.880 "uuid": "7fce876f-6a59-5b00-b152-9e44c56d8aa7", 00:21:49.880 "is_configured": true, 00:21:49.880 "data_offset": 2048, 00:21:49.880 "data_size": 63488 00:21:49.880 }, 00:21:49.880 { 00:21:49.880 "name": "BaseBdev2", 00:21:49.880 "uuid": "505cd83d-9e9e-5c70-880f-02bbc26d12b9", 00:21:49.880 "is_configured": true, 00:21:49.880 "data_offset": 2048, 00:21:49.880 "data_size": 63488 00:21:49.880 } 00:21:49.880 ] 00:21:49.880 }' 00:21:49.880 10:35:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:49.880 10:35:43 -- common/autotest_common.sh@10 -- # set +x 00:21:50.445 10:35:44 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:50.445 10:35:44 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:50.445 10:35:44 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:50.445 10:35:44 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:50.445 10:35:44 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:50.445 10:35:44 -- 
bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:50.445 10:35:44 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:50.704 10:35:44 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:50.704 "name": "raid_bdev1", 00:21:50.704 "uuid": "1fa76749-2d25-43bf-9022-ad20b4de2234", 00:21:50.704 "strip_size_kb": 0, 00:21:50.704 "state": "online", 00:21:50.704 "raid_level": "raid1", 00:21:50.704 "superblock": true, 00:21:50.704 "num_base_bdevs": 2, 00:21:50.704 "num_base_bdevs_discovered": 2, 00:21:50.704 "num_base_bdevs_operational": 2, 00:21:50.704 "base_bdevs_list": [ 00:21:50.704 { 00:21:50.704 "name": "spare", 00:21:50.704 "uuid": "7fce876f-6a59-5b00-b152-9e44c56d8aa7", 00:21:50.704 "is_configured": true, 00:21:50.705 "data_offset": 2048, 00:21:50.705 "data_size": 63488 00:21:50.705 }, 00:21:50.705 { 00:21:50.705 "name": "BaseBdev2", 00:21:50.705 "uuid": "505cd83d-9e9e-5c70-880f-02bbc26d12b9", 00:21:50.705 "is_configured": true, 00:21:50.705 "data_offset": 2048, 00:21:50.705 "data_size": 63488 00:21:50.705 } 00:21:50.705 ] 00:21:50.705 }' 00:21:50.705 10:35:44 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:50.705 10:35:44 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:50.705 10:35:44 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:50.705 10:35:44 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:50.705 10:35:44 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:50.705 10:35:44 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:50.962 10:35:44 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:21:50.962 10:35:44 -- bdev/bdev_raid.sh@709 -- # killprocess 127514 00:21:50.962 10:35:44 -- common/autotest_common.sh@926 -- # '[' -z 127514 ']' 00:21:50.962 10:35:44 -- common/autotest_common.sh@930 -- # kill -0 127514 00:21:50.962 10:35:44 -- common/autotest_common.sh@931 -- # uname 00:21:50.962 10:35:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:50.962 10:35:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 127514 00:21:51.219 killing process with pid 127514 00:21:51.219 Received shutdown signal, test time was about 16.117221 seconds 00:21:51.219 00:21:51.219 Latency(us) 00:21:51.219 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:51.219 =================================================================================================================== 00:21:51.219 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:51.219 10:35:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:51.219 10:35:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:51.219 10:35:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 127514' 00:21:51.219 10:35:44 -- common/autotest_common.sh@945 -- # kill 127514 00:21:51.219 10:35:44 -- common/autotest_common.sh@950 -- # wait 127514 00:21:51.219 [2024-07-12 10:35:44.884826] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:51.219 [2024-07-12 10:35:44.884890] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:51.219 [2024-07-12 10:35:44.884943] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:51.219 [2024-07-12 10:35:44.884965] bdev_raid.c: 351:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x61600000ae80 name raid_bdev1, state offline 00:21:51.219 [2024-07-12 10:35:45.035074] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:52.152 ************************************ 00:21:52.152 END TEST raid_rebuild_test_sb_io 00:21:52.152 ************************************ 00:21:52.152 10:35:45 -- bdev/bdev_raid.sh@711 -- # return 0 00:21:52.152 00:21:52.152 real 0m21.138s 00:21:52.152 user 0m33.676s 00:21:52.152 sys 0m1.972s 00:21:52.152 10:35:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:52.152 10:35:45 -- common/autotest_common.sh@10 -- # set +x 00:21:52.152 10:35:46 -- bdev/bdev_raid.sh@734 -- # for n in 2 4 00:21:52.153 10:35:46 -- bdev/bdev_raid.sh@735 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false 00:21:52.153 10:35:46 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:21:52.153 10:35:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:52.153 10:35:46 -- common/autotest_common.sh@10 -- # set +x 00:21:52.153 ************************************ 00:21:52.153 START TEST raid_rebuild_test 00:21:52.153 ************************************ 00:21:52.153 10:35:46 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 4 false false 00:21:52.153 10:35:46 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:21:52.153 10:35:46 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:21:52.153 10:35:46 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:21:52.153 10:35:46 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:21:52.153 10:35:46 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:21:52.153 10:35:46 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:21:52.153 10:35:46 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:52.153 10:35:46 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:21:52.153 10:35:46 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:52.153 10:35:46 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:52.153 10:35:46 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:21:52.153 10:35:46 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:52.153 10:35:46 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:52.153 10:35:46 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:21:52.153 10:35:46 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:52.153 10:35:46 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:52.153 10:35:46 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:21:52.153 10:35:46 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:52.153 10:35:46 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:52.153 10:35:46 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:21:52.153 10:35:46 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:21:52.153 10:35:46 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:21:52.153 10:35:46 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:21:52.153 10:35:46 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:21:52.153 10:35:46 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:21:52.153 10:35:46 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:21:52.153 10:35:46 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:21:52.153 10:35:46 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:21:52.153 10:35:46 -- bdev/bdev_raid.sh@544 -- # raid_pid=128108 00:21:52.153 10:35:46 -- bdev/bdev_raid.sh@545 -- # waitforlisten 128108 /var/tmp/spdk-raid.sock 00:21:52.153 10:35:46 -- common/autotest_common.sh@819 -- # '[' -z 128108 
']' 00:21:52.153 10:35:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:52.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:52.153 10:35:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:52.153 10:35:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:52.153 10:35:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:52.153 10:35:46 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:52.153 10:35:46 -- common/autotest_common.sh@10 -- # set +x 00:21:52.411 [2024-07-12 10:35:46.113592] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:52.411 [2024-07-12 10:35:46.113774] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128108 ] 00:21:52.411 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:52.411 Zero copy mechanism will not be used. 00:21:52.411 [2024-07-12 10:35:46.277740] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:52.671 [2024-07-12 10:35:46.437941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:52.929 [2024-07-12 10:35:46.602022] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:53.187 10:35:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:53.187 10:35:46 -- common/autotest_common.sh@852 -- # return 0 00:21:53.187 10:35:46 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:53.187 10:35:46 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:53.187 10:35:46 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:53.445 BaseBdev1 00:21:53.445 10:35:47 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:53.445 10:35:47 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:53.445 10:35:47 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:53.703 BaseBdev2 00:21:53.703 10:35:47 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:53.703 10:35:47 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:53.703 10:35:47 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:53.961 BaseBdev3 00:21:53.961 10:35:47 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:53.961 10:35:47 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:53.961 10:35:47 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:21:54.218 BaseBdev4 00:21:54.218 10:35:47 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:21:54.476 spare_malloc 00:21:54.476 10:35:48 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 
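The base-device setup running here builds the rebuild target out of three stacked bdevs: a 32 MiB malloc disk with 512-byte blocks, a delay bdev layered on top, and a passthru wrapper that gives the stack its final name, "spare". In SPDK's delay bdev the -r/-t and -w/-n pairs are average and p99 read/write latency in microseconds, so writes to the spare are slowed by roughly 100 ms — presumably to keep the rebuild in flight long enough for the test to mutate the array mid-process. Condensed from the RPCs traced around this point (socket path and arguments verbatim; the $RPC variable is editorial shorthand, not from the script):

    # The traced bdev stack, condensed into three RPC calls.
    RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
    $RPC bdev_malloc_create 32 512 -b spare_malloc        # 32 MiB, 512 B blocks
    $RPC bdev_delay_create -b spare_malloc -d spare_delay \
        -r 0 -t 0 -w 100000 -n 100000                     # 100000 us write delay
    $RPC bdev_passthru_create -b spare_delay -p spare     # exposed as "spare"

The four BaseBdev malloc disks created just above get no delay layer: only the spare needs to be slow for the rebuild-interruption scenarios that follow.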
00:21:54.476 spare_delay 00:21:54.476 10:35:48 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:54.737 [2024-07-12 10:35:48.539213] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:54.737 [2024-07-12 10:35:48.539299] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:54.737 [2024-07-12 10:35:48.539331] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:54.737 [2024-07-12 10:35:48.539387] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:54.737 [2024-07-12 10:35:48.541519] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:54.737 [2024-07-12 10:35:48.541566] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:54.737 spare 00:21:54.737 10:35:48 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:21:54.995 [2024-07-12 10:35:48.719271] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:54.995 [2024-07-12 10:35:48.721033] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:54.995 [2024-07-12 10:35:48.721086] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:54.995 [2024-07-12 10:35:48.721123] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:54.995 [2024-07-12 10:35:48.721191] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:21:54.995 [2024-07-12 10:35:48.721203] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:21:54.995 [2024-07-12 10:35:48.721342] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:21:54.995 [2024-07-12 10:35:48.721681] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:21:54.995 [2024-07-12 10:35:48.721701] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:21:54.995 [2024-07-12 10:35:48.721843] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:54.995 10:35:48 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:21:54.995 10:35:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:54.995 10:35:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:54.995 10:35:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:54.995 10:35:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:54.995 10:35:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:54.995 10:35:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:54.995 10:35:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:54.995 10:35:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:54.995 10:35:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:54.995 10:35:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:54.995 10:35:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:55.258 10:35:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:55.258 "name": 
"raid_bdev1", 00:21:55.258 "uuid": "1e9195df-5268-4c87-966a-a24943170a53", 00:21:55.258 "strip_size_kb": 0, 00:21:55.258 "state": "online", 00:21:55.258 "raid_level": "raid1", 00:21:55.258 "superblock": false, 00:21:55.258 "num_base_bdevs": 4, 00:21:55.259 "num_base_bdevs_discovered": 4, 00:21:55.259 "num_base_bdevs_operational": 4, 00:21:55.259 "base_bdevs_list": [ 00:21:55.259 { 00:21:55.259 "name": "BaseBdev1", 00:21:55.259 "uuid": "b0d0d8c3-b93a-4b01-963c-cfb88bf7813a", 00:21:55.259 "is_configured": true, 00:21:55.259 "data_offset": 0, 00:21:55.259 "data_size": 65536 00:21:55.259 }, 00:21:55.259 { 00:21:55.259 "name": "BaseBdev2", 00:21:55.259 "uuid": "622c0c74-81ac-4646-8b38-d02e5cca8d7e", 00:21:55.259 "is_configured": true, 00:21:55.259 "data_offset": 0, 00:21:55.259 "data_size": 65536 00:21:55.259 }, 00:21:55.259 { 00:21:55.259 "name": "BaseBdev3", 00:21:55.259 "uuid": "cfb5d791-9f34-4c4a-abdc-37cd892dc12b", 00:21:55.259 "is_configured": true, 00:21:55.259 "data_offset": 0, 00:21:55.259 "data_size": 65536 00:21:55.259 }, 00:21:55.259 { 00:21:55.259 "name": "BaseBdev4", 00:21:55.259 "uuid": "6d47897f-ba90-43d6-8c33-9938ad3e36e9", 00:21:55.259 "is_configured": true, 00:21:55.259 "data_offset": 0, 00:21:55.259 "data_size": 65536 00:21:55.259 } 00:21:55.259 ] 00:21:55.259 }' 00:21:55.259 10:35:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:55.259 10:35:48 -- common/autotest_common.sh@10 -- # set +x 00:21:55.825 10:35:49 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:55.825 10:35:49 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:21:55.825 [2024-07-12 10:35:49.739634] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:56.084 10:35:49 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:21:56.084 10:35:49 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:56.084 10:35:49 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:56.343 10:35:50 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:21:56.343 10:35:50 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:21:56.343 10:35:50 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:21:56.343 10:35:50 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:21:56.343 10:35:50 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:56.343 10:35:50 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:21:56.343 10:35:50 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:56.343 10:35:50 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:21:56.343 10:35:50 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:56.343 10:35:50 -- bdev/nbd_common.sh@12 -- # local i 00:21:56.343 10:35:50 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:56.343 10:35:50 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:56.343 10:35:50 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:21:56.343 [2024-07-12 10:35:50.255540] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:21:56.601 /dev/nbd0 00:21:56.602 10:35:50 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:56.602 10:35:50 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:56.602 10:35:50 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:21:56.602 10:35:50 -- common/autotest_common.sh@857 -- # 
local i 00:21:56.602 10:35:50 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:56.602 10:35:50 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:56.602 10:35:50 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:21:56.602 10:35:50 -- common/autotest_common.sh@861 -- # break 00:21:56.602 10:35:50 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:56.602 10:35:50 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:56.602 10:35:50 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:56.602 1+0 records in 00:21:56.602 1+0 records out 00:21:56.602 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000213374 s, 19.2 MB/s 00:21:56.602 10:35:50 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:56.602 10:35:50 -- common/autotest_common.sh@874 -- # size=4096 00:21:56.602 10:35:50 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:56.602 10:35:50 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:56.602 10:35:50 -- common/autotest_common.sh@877 -- # return 0 00:21:56.602 10:35:50 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:56.602 10:35:50 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:56.602 10:35:50 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:21:56.602 10:35:50 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:21:56.602 10:35:50 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:22:01.866 65536+0 records in 00:22:01.866 65536+0 records out 00:22:01.866 33554432 bytes (34 MB, 32 MiB) copied, 5.35504 s, 6.3 MB/s 00:22:01.866 10:35:55 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:22:01.866 10:35:55 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:01.866 10:35:55 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:22:01.866 10:35:55 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:01.866 10:35:55 -- bdev/nbd_common.sh@51 -- # local i 00:22:01.866 10:35:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:01.866 10:35:55 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:02.124 10:35:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:02.124 10:35:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:02.125 10:35:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:02.125 10:35:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:02.125 10:35:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:02.125 10:35:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:02.125 10:35:55 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:22:02.125 [2024-07-12 10:35:55.946381] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:02.383 10:35:56 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:22:02.383 10:35:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:02.383 10:35:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:02.383 10:35:56 -- bdev/nbd_common.sh@41 -- # break 00:22:02.383 10:35:56 -- bdev/nbd_common.sh@45 -- # return 0 00:22:02.383 10:35:56 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:22:02.383 [2024-07-12 10:35:56.214097] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:02.383 
10:35:56 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:02.383 10:35:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:02.383 10:35:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:02.383 10:35:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:02.383 10:35:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:02.383 10:35:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:02.383 10:35:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:02.383 10:35:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:02.383 10:35:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:02.383 10:35:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:02.383 10:35:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:02.383 10:35:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:02.641 10:35:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:02.641 "name": "raid_bdev1", 00:22:02.641 "uuid": "1e9195df-5268-4c87-966a-a24943170a53", 00:22:02.641 "strip_size_kb": 0, 00:22:02.641 "state": "online", 00:22:02.641 "raid_level": "raid1", 00:22:02.641 "superblock": false, 00:22:02.641 "num_base_bdevs": 4, 00:22:02.641 "num_base_bdevs_discovered": 3, 00:22:02.641 "num_base_bdevs_operational": 3, 00:22:02.641 "base_bdevs_list": [ 00:22:02.641 { 00:22:02.641 "name": null, 00:22:02.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:02.641 "is_configured": false, 00:22:02.641 "data_offset": 0, 00:22:02.641 "data_size": 65536 00:22:02.641 }, 00:22:02.641 { 00:22:02.641 "name": "BaseBdev2", 00:22:02.641 "uuid": "622c0c74-81ac-4646-8b38-d02e5cca8d7e", 00:22:02.641 "is_configured": true, 00:22:02.641 "data_offset": 0, 00:22:02.641 "data_size": 65536 00:22:02.641 }, 00:22:02.641 { 00:22:02.641 "name": "BaseBdev3", 00:22:02.641 "uuid": "cfb5d791-9f34-4c4a-abdc-37cd892dc12b", 00:22:02.641 "is_configured": true, 00:22:02.641 "data_offset": 0, 00:22:02.641 "data_size": 65536 00:22:02.641 }, 00:22:02.641 { 00:22:02.641 "name": "BaseBdev4", 00:22:02.641 "uuid": "6d47897f-ba90-43d6-8c33-9938ad3e36e9", 00:22:02.641 "is_configured": true, 00:22:02.641 "data_offset": 0, 00:22:02.641 "data_size": 65536 00:22:02.641 } 00:22:02.641 ] 00:22:02.641 }' 00:22:02.641 10:35:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:02.641 10:35:56 -- common/autotest_common.sh@10 -- # set +x 00:22:03.208 10:35:57 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:03.466 [2024-07-12 10:35:57.262441] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:03.466 [2024-07-12 10:35:57.262477] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:03.466 [2024-07-12 10:35:57.272897] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d0b6a0 00:22:03.466 [2024-07-12 10:35:57.274757] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:03.466 10:35:57 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:22:04.401 10:35:58 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:04.401 10:35:58 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:04.401 10:35:58 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:04.401 
10:35:58 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:04.401 10:35:58 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:04.401 10:35:58 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:04.401 10:35:58 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:04.660 10:35:58 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:04.660 "name": "raid_bdev1", 00:22:04.660 "uuid": "1e9195df-5268-4c87-966a-a24943170a53", 00:22:04.660 "strip_size_kb": 0, 00:22:04.660 "state": "online", 00:22:04.660 "raid_level": "raid1", 00:22:04.660 "superblock": false, 00:22:04.660 "num_base_bdevs": 4, 00:22:04.660 "num_base_bdevs_discovered": 4, 00:22:04.660 "num_base_bdevs_operational": 4, 00:22:04.660 "process": { 00:22:04.660 "type": "rebuild", 00:22:04.660 "target": "spare", 00:22:04.660 "progress": { 00:22:04.660 "blocks": 24576, 00:22:04.660 "percent": 37 00:22:04.660 } 00:22:04.660 }, 00:22:04.660 "base_bdevs_list": [ 00:22:04.660 { 00:22:04.660 "name": "spare", 00:22:04.660 "uuid": "6ff17adf-95a2-542c-9a4c-56efa5f089e7", 00:22:04.660 "is_configured": true, 00:22:04.660 "data_offset": 0, 00:22:04.660 "data_size": 65536 00:22:04.660 }, 00:22:04.660 { 00:22:04.660 "name": "BaseBdev2", 00:22:04.660 "uuid": "622c0c74-81ac-4646-8b38-d02e5cca8d7e", 00:22:04.660 "is_configured": true, 00:22:04.660 "data_offset": 0, 00:22:04.660 "data_size": 65536 00:22:04.660 }, 00:22:04.660 { 00:22:04.660 "name": "BaseBdev3", 00:22:04.660 "uuid": "cfb5d791-9f34-4c4a-abdc-37cd892dc12b", 00:22:04.660 "is_configured": true, 00:22:04.660 "data_offset": 0, 00:22:04.660 "data_size": 65536 00:22:04.660 }, 00:22:04.660 { 00:22:04.660 "name": "BaseBdev4", 00:22:04.660 "uuid": "6d47897f-ba90-43d6-8c33-9938ad3e36e9", 00:22:04.660 "is_configured": true, 00:22:04.660 "data_offset": 0, 00:22:04.660 "data_size": 65536 00:22:04.660 } 00:22:04.660 ] 00:22:04.660 }' 00:22:04.660 10:35:58 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:04.660 10:35:58 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:04.660 10:35:58 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:04.919 10:35:58 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:04.919 10:35:58 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:04.919 [2024-07-12 10:35:58.829114] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:05.178 [2024-07-12 10:35:58.882898] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:05.178 [2024-07-12 10:35:58.882996] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:05.178 10:35:58 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:05.178 10:35:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:05.178 10:35:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:05.178 10:35:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:05.178 10:35:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:05.178 10:35:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:05.178 10:35:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:05.178 10:35:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:05.178 10:35:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 
00:22:05.178 10:35:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:05.178 10:35:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:05.178 10:35:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:05.178 10:35:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:05.178 "name": "raid_bdev1", 00:22:05.178 "uuid": "1e9195df-5268-4c87-966a-a24943170a53", 00:22:05.178 "strip_size_kb": 0, 00:22:05.178 "state": "online", 00:22:05.178 "raid_level": "raid1", 00:22:05.178 "superblock": false, 00:22:05.178 "num_base_bdevs": 4, 00:22:05.178 "num_base_bdevs_discovered": 3, 00:22:05.178 "num_base_bdevs_operational": 3, 00:22:05.178 "base_bdevs_list": [ 00:22:05.178 { 00:22:05.178 "name": null, 00:22:05.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:05.178 "is_configured": false, 00:22:05.178 "data_offset": 0, 00:22:05.178 "data_size": 65536 00:22:05.178 }, 00:22:05.178 { 00:22:05.178 "name": "BaseBdev2", 00:22:05.178 "uuid": "622c0c74-81ac-4646-8b38-d02e5cca8d7e", 00:22:05.178 "is_configured": true, 00:22:05.178 "data_offset": 0, 00:22:05.178 "data_size": 65536 00:22:05.178 }, 00:22:05.178 { 00:22:05.178 "name": "BaseBdev3", 00:22:05.178 "uuid": "cfb5d791-9f34-4c4a-abdc-37cd892dc12b", 00:22:05.178 "is_configured": true, 00:22:05.178 "data_offset": 0, 00:22:05.178 "data_size": 65536 00:22:05.178 }, 00:22:05.178 { 00:22:05.178 "name": "BaseBdev4", 00:22:05.178 "uuid": "6d47897f-ba90-43d6-8c33-9938ad3e36e9", 00:22:05.178 "is_configured": true, 00:22:05.178 "data_offset": 0, 00:22:05.178 "data_size": 65536 00:22:05.178 } 00:22:05.178 ] 00:22:05.178 }' 00:22:05.178 10:35:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:05.178 10:35:59 -- common/autotest_common.sh@10 -- # set +x 00:22:06.114 10:35:59 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:06.114 10:35:59 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:06.114 10:35:59 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:06.114 10:35:59 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:06.114 10:35:59 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:06.114 10:35:59 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:06.114 10:35:59 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:06.114 10:35:59 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:06.114 "name": "raid_bdev1", 00:22:06.114 "uuid": "1e9195df-5268-4c87-966a-a24943170a53", 00:22:06.114 "strip_size_kb": 0, 00:22:06.114 "state": "online", 00:22:06.114 "raid_level": "raid1", 00:22:06.114 "superblock": false, 00:22:06.114 "num_base_bdevs": 4, 00:22:06.114 "num_base_bdevs_discovered": 3, 00:22:06.114 "num_base_bdevs_operational": 3, 00:22:06.114 "base_bdevs_list": [ 00:22:06.114 { 00:22:06.114 "name": null, 00:22:06.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:06.114 "is_configured": false, 00:22:06.114 "data_offset": 0, 00:22:06.114 "data_size": 65536 00:22:06.114 }, 00:22:06.114 { 00:22:06.114 "name": "BaseBdev2", 00:22:06.114 "uuid": "622c0c74-81ac-4646-8b38-d02e5cca8d7e", 00:22:06.114 "is_configured": true, 00:22:06.114 "data_offset": 0, 00:22:06.114 "data_size": 65536 00:22:06.114 }, 00:22:06.114 { 00:22:06.114 "name": "BaseBdev3", 00:22:06.114 "uuid": "cfb5d791-9f34-4c4a-abdc-37cd892dc12b", 00:22:06.114 "is_configured": true, 00:22:06.114 "data_offset": 0, 
00:22:06.114 "data_size": 65536 00:22:06.114 }, 00:22:06.114 { 00:22:06.114 "name": "BaseBdev4", 00:22:06.114 "uuid": "6d47897f-ba90-43d6-8c33-9938ad3e36e9", 00:22:06.114 "is_configured": true, 00:22:06.114 "data_offset": 0, 00:22:06.114 "data_size": 65536 00:22:06.114 } 00:22:06.114 ] 00:22:06.114 }' 00:22:06.114 10:35:59 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:06.373 10:36:00 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:06.373 10:36:00 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:06.373 10:36:00 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:06.373 10:36:00 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:06.373 [2024-07-12 10:36:00.280735] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:06.373 [2024-07-12 10:36:00.280777] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:06.632 [2024-07-12 10:36:00.290466] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d0b840 00:22:06.632 [2024-07-12 10:36:00.292119] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:06.632 10:36:00 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:22:07.568 10:36:01 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:07.568 10:36:01 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:07.568 10:36:01 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:07.568 10:36:01 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:07.568 10:36:01 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:07.568 10:36:01 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:07.568 10:36:01 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:07.826 10:36:01 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:07.826 "name": "raid_bdev1", 00:22:07.826 "uuid": "1e9195df-5268-4c87-966a-a24943170a53", 00:22:07.826 "strip_size_kb": 0, 00:22:07.826 "state": "online", 00:22:07.826 "raid_level": "raid1", 00:22:07.826 "superblock": false, 00:22:07.826 "num_base_bdevs": 4, 00:22:07.826 "num_base_bdevs_discovered": 4, 00:22:07.826 "num_base_bdevs_operational": 4, 00:22:07.826 "process": { 00:22:07.826 "type": "rebuild", 00:22:07.826 "target": "spare", 00:22:07.826 "progress": { 00:22:07.826 "blocks": 24576, 00:22:07.826 "percent": 37 00:22:07.826 } 00:22:07.826 }, 00:22:07.826 "base_bdevs_list": [ 00:22:07.826 { 00:22:07.826 "name": "spare", 00:22:07.826 "uuid": "6ff17adf-95a2-542c-9a4c-56efa5f089e7", 00:22:07.826 "is_configured": true, 00:22:07.826 "data_offset": 0, 00:22:07.826 "data_size": 65536 00:22:07.826 }, 00:22:07.826 { 00:22:07.826 "name": "BaseBdev2", 00:22:07.826 "uuid": "622c0c74-81ac-4646-8b38-d02e5cca8d7e", 00:22:07.826 "is_configured": true, 00:22:07.826 "data_offset": 0, 00:22:07.826 "data_size": 65536 00:22:07.826 }, 00:22:07.826 { 00:22:07.826 "name": "BaseBdev3", 00:22:07.826 "uuid": "cfb5d791-9f34-4c4a-abdc-37cd892dc12b", 00:22:07.826 "is_configured": true, 00:22:07.826 "data_offset": 0, 00:22:07.826 "data_size": 65536 00:22:07.826 }, 00:22:07.826 { 00:22:07.826 "name": "BaseBdev4", 00:22:07.826 "uuid": "6d47897f-ba90-43d6-8c33-9938ad3e36e9", 00:22:07.826 "is_configured": true, 00:22:07.826 "data_offset": 0, 00:22:07.826 "data_size": 65536 
00:22:07.826 } 00:22:07.826 ] 00:22:07.826 }' 00:22:07.826 10:36:01 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:07.826 10:36:01 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:07.826 10:36:01 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:07.826 10:36:01 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:07.826 10:36:01 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:22:07.826 10:36:01 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:22:07.826 10:36:01 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:22:07.826 10:36:01 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:22:07.826 10:36:01 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:22:08.085 [2024-07-12 10:36:01.858481] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:08.085 [2024-07-12 10:36:01.900215] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d0b840 00:22:08.085 10:36:01 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:22:08.085 10:36:01 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:22:08.085 10:36:01 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:08.085 10:36:01 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:08.085 10:36:01 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:08.085 10:36:01 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:08.085 10:36:01 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:08.085 10:36:01 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:08.085 10:36:01 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:08.343 10:36:02 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:08.343 "name": "raid_bdev1", 00:22:08.343 "uuid": "1e9195df-5268-4c87-966a-a24943170a53", 00:22:08.343 "strip_size_kb": 0, 00:22:08.343 "state": "online", 00:22:08.343 "raid_level": "raid1", 00:22:08.343 "superblock": false, 00:22:08.343 "num_base_bdevs": 4, 00:22:08.343 "num_base_bdevs_discovered": 3, 00:22:08.343 "num_base_bdevs_operational": 3, 00:22:08.343 "process": { 00:22:08.343 "type": "rebuild", 00:22:08.343 "target": "spare", 00:22:08.343 "progress": { 00:22:08.343 "blocks": 36864, 00:22:08.343 "percent": 56 00:22:08.343 } 00:22:08.343 }, 00:22:08.343 "base_bdevs_list": [ 00:22:08.343 { 00:22:08.343 "name": "spare", 00:22:08.343 "uuid": "6ff17adf-95a2-542c-9a4c-56efa5f089e7", 00:22:08.343 "is_configured": true, 00:22:08.343 "data_offset": 0, 00:22:08.343 "data_size": 65536 00:22:08.343 }, 00:22:08.343 { 00:22:08.343 "name": null, 00:22:08.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:08.343 "is_configured": false, 00:22:08.343 "data_offset": 0, 00:22:08.343 "data_size": 65536 00:22:08.343 }, 00:22:08.343 { 00:22:08.343 "name": "BaseBdev3", 00:22:08.343 "uuid": "cfb5d791-9f34-4c4a-abdc-37cd892dc12b", 00:22:08.343 "is_configured": true, 00:22:08.343 "data_offset": 0, 00:22:08.343 "data_size": 65536 00:22:08.343 }, 00:22:08.343 { 00:22:08.343 "name": "BaseBdev4", 00:22:08.343 "uuid": "6d47897f-ba90-43d6-8c33-9938ad3e36e9", 00:22:08.343 "is_configured": true, 00:22:08.343 "data_offset": 0, 00:22:08.343 "data_size": 65536 00:22:08.343 } 00:22:08.343 ] 00:22:08.343 }' 00:22:08.343 10:36:02 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // 
"none"' 00:22:08.343 10:36:02 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:08.343 10:36:02 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:08.602 10:36:02 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:08.602 10:36:02 -- bdev/bdev_raid.sh@657 -- # local timeout=475 00:22:08.602 10:36:02 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:08.602 10:36:02 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:08.602 10:36:02 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:08.602 10:36:02 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:08.602 10:36:02 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:08.602 10:36:02 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:08.602 10:36:02 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:08.602 10:36:02 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:08.602 10:36:02 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:08.602 "name": "raid_bdev1", 00:22:08.602 "uuid": "1e9195df-5268-4c87-966a-a24943170a53", 00:22:08.602 "strip_size_kb": 0, 00:22:08.602 "state": "online", 00:22:08.602 "raid_level": "raid1", 00:22:08.602 "superblock": false, 00:22:08.602 "num_base_bdevs": 4, 00:22:08.602 "num_base_bdevs_discovered": 3, 00:22:08.602 "num_base_bdevs_operational": 3, 00:22:08.602 "process": { 00:22:08.602 "type": "rebuild", 00:22:08.602 "target": "spare", 00:22:08.602 "progress": { 00:22:08.602 "blocks": 43008, 00:22:08.602 "percent": 65 00:22:08.602 } 00:22:08.602 }, 00:22:08.602 "base_bdevs_list": [ 00:22:08.602 { 00:22:08.602 "name": "spare", 00:22:08.602 "uuid": "6ff17adf-95a2-542c-9a4c-56efa5f089e7", 00:22:08.602 "is_configured": true, 00:22:08.602 "data_offset": 0, 00:22:08.602 "data_size": 65536 00:22:08.602 }, 00:22:08.602 { 00:22:08.602 "name": null, 00:22:08.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:08.602 "is_configured": false, 00:22:08.602 "data_offset": 0, 00:22:08.602 "data_size": 65536 00:22:08.602 }, 00:22:08.602 { 00:22:08.602 "name": "BaseBdev3", 00:22:08.602 "uuid": "cfb5d791-9f34-4c4a-abdc-37cd892dc12b", 00:22:08.602 "is_configured": true, 00:22:08.602 "data_offset": 0, 00:22:08.602 "data_size": 65536 00:22:08.602 }, 00:22:08.602 { 00:22:08.602 "name": "BaseBdev4", 00:22:08.602 "uuid": "6d47897f-ba90-43d6-8c33-9938ad3e36e9", 00:22:08.602 "is_configured": true, 00:22:08.602 "data_offset": 0, 00:22:08.602 "data_size": 65536 00:22:08.602 } 00:22:08.602 ] 00:22:08.602 }' 00:22:08.860 10:36:02 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:08.860 10:36:02 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:08.860 10:36:02 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:08.860 10:36:02 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:08.860 10:36:02 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:09.794 [2024-07-12 10:36:03.508288] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:09.794 [2024-07-12 10:36:03.508357] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:09.794 [2024-07-12 10:36:03.508424] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:09.794 10:36:03 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:09.794 10:36:03 -- bdev/bdev_raid.sh@659 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:09.794 10:36:03 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:09.794 10:36:03 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:09.794 10:36:03 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:09.794 10:36:03 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:09.794 10:36:03 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:09.794 10:36:03 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:10.052 10:36:03 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:10.052 "name": "raid_bdev1", 00:22:10.052 "uuid": "1e9195df-5268-4c87-966a-a24943170a53", 00:22:10.052 "strip_size_kb": 0, 00:22:10.052 "state": "online", 00:22:10.052 "raid_level": "raid1", 00:22:10.052 "superblock": false, 00:22:10.052 "num_base_bdevs": 4, 00:22:10.052 "num_base_bdevs_discovered": 3, 00:22:10.052 "num_base_bdevs_operational": 3, 00:22:10.052 "base_bdevs_list": [ 00:22:10.052 { 00:22:10.052 "name": "spare", 00:22:10.052 "uuid": "6ff17adf-95a2-542c-9a4c-56efa5f089e7", 00:22:10.052 "is_configured": true, 00:22:10.052 "data_offset": 0, 00:22:10.052 "data_size": 65536 00:22:10.052 }, 00:22:10.052 { 00:22:10.052 "name": null, 00:22:10.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:10.052 "is_configured": false, 00:22:10.052 "data_offset": 0, 00:22:10.052 "data_size": 65536 00:22:10.052 }, 00:22:10.052 { 00:22:10.052 "name": "BaseBdev3", 00:22:10.052 "uuid": "cfb5d791-9f34-4c4a-abdc-37cd892dc12b", 00:22:10.052 "is_configured": true, 00:22:10.052 "data_offset": 0, 00:22:10.052 "data_size": 65536 00:22:10.052 }, 00:22:10.052 { 00:22:10.052 "name": "BaseBdev4", 00:22:10.052 "uuid": "6d47897f-ba90-43d6-8c33-9938ad3e36e9", 00:22:10.052 "is_configured": true, 00:22:10.052 "data_offset": 0, 00:22:10.052 "data_size": 65536 00:22:10.052 } 00:22:10.052 ] 00:22:10.052 }' 00:22:10.052 10:36:03 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:10.052 10:36:03 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:10.052 10:36:03 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:10.311 10:36:03 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:22:10.311 10:36:03 -- bdev/bdev_raid.sh@660 -- # break 00:22:10.311 10:36:03 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:10.311 10:36:03 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:10.311 10:36:03 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:10.311 10:36:03 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:10.311 10:36:03 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:10.311 10:36:03 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:10.311 10:36:03 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:10.311 10:36:04 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:10.311 "name": "raid_bdev1", 00:22:10.311 "uuid": "1e9195df-5268-4c87-966a-a24943170a53", 00:22:10.311 "strip_size_kb": 0, 00:22:10.311 "state": "online", 00:22:10.311 "raid_level": "raid1", 00:22:10.311 "superblock": false, 00:22:10.311 "num_base_bdevs": 4, 00:22:10.311 "num_base_bdevs_discovered": 3, 00:22:10.311 "num_base_bdevs_operational": 3, 00:22:10.311 "base_bdevs_list": [ 00:22:10.311 { 00:22:10.311 "name": "spare", 00:22:10.311 "uuid": 
"6ff17adf-95a2-542c-9a4c-56efa5f089e7", 00:22:10.311 "is_configured": true, 00:22:10.311 "data_offset": 0, 00:22:10.311 "data_size": 65536 00:22:10.311 }, 00:22:10.311 { 00:22:10.311 "name": null, 00:22:10.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:10.311 "is_configured": false, 00:22:10.311 "data_offset": 0, 00:22:10.311 "data_size": 65536 00:22:10.311 }, 00:22:10.311 { 00:22:10.311 "name": "BaseBdev3", 00:22:10.311 "uuid": "cfb5d791-9f34-4c4a-abdc-37cd892dc12b", 00:22:10.311 "is_configured": true, 00:22:10.311 "data_offset": 0, 00:22:10.311 "data_size": 65536 00:22:10.311 }, 00:22:10.311 { 00:22:10.311 "name": "BaseBdev4", 00:22:10.311 "uuid": "6d47897f-ba90-43d6-8c33-9938ad3e36e9", 00:22:10.311 "is_configured": true, 00:22:10.311 "data_offset": 0, 00:22:10.311 "data_size": 65536 00:22:10.311 } 00:22:10.311 ] 00:22:10.311 }' 00:22:10.311 10:36:04 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:10.569 10:36:04 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:10.569 10:36:04 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:10.569 10:36:04 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:10.569 10:36:04 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:10.569 10:36:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:10.569 10:36:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:10.569 10:36:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:10.569 10:36:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:10.569 10:36:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:10.569 10:36:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:10.570 10:36:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:10.570 10:36:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:10.570 10:36:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:10.570 10:36:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:10.570 10:36:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:10.828 10:36:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:10.828 "name": "raid_bdev1", 00:22:10.828 "uuid": "1e9195df-5268-4c87-966a-a24943170a53", 00:22:10.828 "strip_size_kb": 0, 00:22:10.828 "state": "online", 00:22:10.828 "raid_level": "raid1", 00:22:10.828 "superblock": false, 00:22:10.828 "num_base_bdevs": 4, 00:22:10.828 "num_base_bdevs_discovered": 3, 00:22:10.828 "num_base_bdevs_operational": 3, 00:22:10.828 "base_bdevs_list": [ 00:22:10.828 { 00:22:10.828 "name": "spare", 00:22:10.828 "uuid": "6ff17adf-95a2-542c-9a4c-56efa5f089e7", 00:22:10.828 "is_configured": true, 00:22:10.828 "data_offset": 0, 00:22:10.828 "data_size": 65536 00:22:10.828 }, 00:22:10.828 { 00:22:10.828 "name": null, 00:22:10.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:10.828 "is_configured": false, 00:22:10.828 "data_offset": 0, 00:22:10.828 "data_size": 65536 00:22:10.828 }, 00:22:10.828 { 00:22:10.829 "name": "BaseBdev3", 00:22:10.829 "uuid": "cfb5d791-9f34-4c4a-abdc-37cd892dc12b", 00:22:10.829 "is_configured": true, 00:22:10.829 "data_offset": 0, 00:22:10.829 "data_size": 65536 00:22:10.829 }, 00:22:10.829 { 00:22:10.829 "name": "BaseBdev4", 00:22:10.829 "uuid": "6d47897f-ba90-43d6-8c33-9938ad3e36e9", 00:22:10.829 "is_configured": true, 00:22:10.829 "data_offset": 0, 00:22:10.829 "data_size": 65536 
00:22:10.829 } 00:22:10.829 ] 00:22:10.829 }' 00:22:10.829 10:36:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:10.829 10:36:04 -- common/autotest_common.sh@10 -- # set +x 00:22:11.394 10:36:05 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:11.652 [2024-07-12 10:36:05.347342] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:11.652 [2024-07-12 10:36:05.347399] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:11.652 [2024-07-12 10:36:05.347471] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:11.652 [2024-07-12 10:36:05.347546] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:11.652 [2024-07-12 10:36:05.347560] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:22:11.652 10:36:05 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:11.652 10:36:05 -- bdev/bdev_raid.sh@671 -- # jq length 00:22:11.910 10:36:05 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:22:11.910 10:36:05 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:22:11.910 10:36:05 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:22:11.910 10:36:05 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:11.910 10:36:05 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:22:11.910 10:36:05 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:11.910 10:36:05 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:22:11.910 10:36:05 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:11.910 10:36:05 -- bdev/nbd_common.sh@12 -- # local i 00:22:11.910 10:36:05 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:11.910 10:36:05 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:11.910 10:36:05 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:22:11.910 /dev/nbd0 00:22:11.910 10:36:05 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:12.168 10:36:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:12.168 10:36:05 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:22:12.168 10:36:05 -- common/autotest_common.sh@857 -- # local i 00:22:12.168 10:36:05 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:12.168 10:36:05 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:12.168 10:36:05 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:22:12.168 10:36:05 -- common/autotest_common.sh@861 -- # break 00:22:12.168 10:36:05 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:12.168 10:36:05 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:12.168 10:36:05 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:12.168 1+0 records in 00:22:12.168 1+0 records out 00:22:12.168 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000507538 s, 8.1 MB/s 00:22:12.168 10:36:05 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:12.168 10:36:05 -- common/autotest_common.sh@874 -- # size=4096 00:22:12.168 10:36:05 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:12.168 
10:36:05 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:12.168 10:36:05 -- common/autotest_common.sh@877 -- # return 0 00:22:12.168 10:36:05 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:12.168 10:36:05 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:12.168 10:36:05 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:22:12.428 /dev/nbd1 00:22:12.428 10:36:06 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:12.428 10:36:06 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:12.428 10:36:06 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:22:12.428 10:36:06 -- common/autotest_common.sh@857 -- # local i 00:22:12.428 10:36:06 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:12.428 10:36:06 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:12.428 10:36:06 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:22:12.428 10:36:06 -- common/autotest_common.sh@861 -- # break 00:22:12.428 10:36:06 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:12.428 10:36:06 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:12.428 10:36:06 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:12.428 1+0 records in 00:22:12.428 1+0 records out 00:22:12.428 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000527965 s, 7.8 MB/s 00:22:12.428 10:36:06 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:12.428 10:36:06 -- common/autotest_common.sh@874 -- # size=4096 00:22:12.428 10:36:06 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:12.428 10:36:06 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:12.428 10:36:06 -- common/autotest_common.sh@877 -- # return 0 00:22:12.428 10:36:06 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:12.428 10:36:06 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:12.428 10:36:06 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:22:12.428 10:36:06 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:22:12.428 10:36:06 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:12.428 10:36:06 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:22:12.428 10:36:06 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:12.428 10:36:06 -- bdev/nbd_common.sh@51 -- # local i 00:22:12.428 10:36:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:12.428 10:36:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:12.687 10:36:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:12.687 10:36:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:12.687 10:36:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:12.687 10:36:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:12.687 10:36:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:12.687 10:36:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:12.687 10:36:06 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:22:12.687 10:36:06 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:22:12.687 10:36:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:12.687 10:36:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:12.687 10:36:06 -- bdev/nbd_common.sh@41 -- # break 00:22:12.687 10:36:06 -- bdev/nbd_common.sh@45 
-- # return 0
00:22:12.687 10:36:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:22:12.687 10:36:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1
00:22:13.254 10:36:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:22:13.254 10:36:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:22:13.254 10:36:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:22:13.254 10:36:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:22:13.254 10:36:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:22:13.254 10:36:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:22:13.254 10:36:06 -- bdev/nbd_common.sh@39 -- # sleep 0.1
00:22:13.254 10:36:06 -- bdev/nbd_common.sh@37 -- # (( i++ ))
00:22:13.254 10:36:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:22:13.254 10:36:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:22:13.255 10:36:06 -- bdev/nbd_common.sh@41 -- # break
00:22:13.255 10:36:06 -- bdev/nbd_common.sh@45 -- # return 0
00:22:13.255 10:36:06 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']'
00:22:13.255 10:36:06 -- bdev/bdev_raid.sh@709 -- # killprocess 128108
00:22:13.255 10:36:06 -- common/autotest_common.sh@926 -- # '[' -z 128108 ']'
00:22:13.255 10:36:06 -- common/autotest_common.sh@930 -- # kill -0 128108
00:22:13.255 10:36:06 -- common/autotest_common.sh@931 -- # uname
00:22:13.255 10:36:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:22:13.255 10:36:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 128108
00:22:13.255 10:36:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:22:13.255 10:36:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:22:13.255 10:36:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 128108'
killing process with pid 128108
10:36:06 -- common/autotest_common.sh@945 -- # kill 128108
00:22:13.255 Received shutdown signal, test time was about 60.000000 seconds
00:22:13.255
00:22:13.255 Latency(us)
00:22:13.255 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:13.255 ===================================================================================================================
00:22:13.255 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:22:13.255 [2024-07-12 10:36:07.001137] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
10:36:06 -- common/autotest_common.sh@950 -- # wait 128108
00:22:13.513 [2024-07-12 10:36:07.312220] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:22:14.449 ************************************
00:22:14.449 END TEST raid_rebuild_test
00:22:14.449 ************************************
00:22:14.449 10:36:08 -- bdev/bdev_raid.sh@711 -- # return 0
00:22:14.449
00:22:14.449 real 0m22.179s
00:22:14.449 user 0m30.859s
00:22:14.449 sys 0m3.894s
00:22:14.449 10:36:08 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:22:14.449 10:36:08 -- common/autotest_common.sh@10 -- # set +x
00:22:14.449 10:36:08 -- bdev/bdev_raid.sh@736 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false
00:22:14.449 10:36:08 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']'
00:22:14.449 10:36:08 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:22:14.449 10:36:08 -- common/autotest_common.sh@10 -- # set +x
00:22:14.449 ************************************
00:22:14.449 START TEST raid_rebuild_test_sb
************************************
00:22:14.449 10:36:08 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 4 true false
00:22:14.449 10:36:08 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1
00:22:14.449 10:36:08 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4
00:22:14.449 10:36:08 -- bdev/bdev_raid.sh@519 -- # local superblock=true
00:22:14.449 10:36:08 -- bdev/bdev_raid.sh@520 -- # local background_io=false
00:22:14.449 10:36:08 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done))
00:22:14.449 10:36:08 -- bdev/bdev_raid.sh@521 -- # (( i = 1 ))
00:22:14.449 10:36:08 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:22:14.449 10:36:08 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1
00:22:14.449 10:36:08 -- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:22:14.449 10:36:08 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:22:14.449 10:36:08 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2
00:22:14.449 10:36:08 -- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:22:14.449 10:36:08 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:22:14.449 10:36:08 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3
00:22:14.449 10:36:08 -- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:22:14.449 10:36:08 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:22:14.449 10:36:08 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4
00:22:14.449 10:36:08 -- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:22:14.449 10:36:08 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:22:14.449 10:36:08 -- bdev/bdev_raid.sh@521 -- # local base_bdevs
00:22:14.449 10:36:08 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1
00:22:14.449 10:36:08 -- bdev/bdev_raid.sh@523 -- # local strip_size
00:22:14.449 10:36:08 -- bdev/bdev_raid.sh@524 -- # local create_arg
00:22:14.449 10:36:08 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size
00:22:14.449 10:36:08 -- bdev/bdev_raid.sh@526 -- # local data_offset
00:22:14.449 10:36:08 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']'
00:22:14.449 10:36:08 -- bdev/bdev_raid.sh@536 -- # strip_size=0
00:22:14.449 10:36:08 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']'
00:22:14.449 10:36:08 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s'
00:22:14.449 10:36:08 -- bdev/bdev_raid.sh@544 -- # raid_pid=128689
00:22:14.449 10:36:08 -- bdev/bdev_raid.sh@545 -- # waitforlisten 128689 /var/tmp/spdk-raid.sock
00:22:14.449 10:36:08 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:22:14.449 10:36:08 -- common/autotest_common.sh@819 -- # '[' -z 128689 ']'
00:22:14.449 10:36:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:22:14.449 10:36:08 -- common/autotest_common.sh@824 -- # local max_retries=100
00:22:14.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:22:14.449 10:36:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:22:14.449 10:36:08 -- common/autotest_common.sh@828 -- # xtrace_disable
00:22:14.449 10:36:08 -- common/autotest_common.sh@10 -- # set +x
00:22:14.707 [2024-07-12 10:36:08.370595] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:22:14.707 I/O size of 3145728 is greater than zero copy threshold (65536).
00:22:14.707 Zero copy mechanism will not be used.
00:22:14.707 [2024-07-12 10:36:08.370800] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128689 ] 00:22:14.707 [2024-07-12 10:36:08.537110] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:14.965 [2024-07-12 10:36:08.695122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:14.965 [2024-07-12 10:36:08.859153] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:15.530 10:36:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:15.530 10:36:09 -- common/autotest_common.sh@852 -- # return 0 00:22:15.530 10:36:09 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:15.530 10:36:09 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:15.530 10:36:09 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:15.789 BaseBdev1_malloc 00:22:15.789 10:36:09 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:15.789 [2024-07-12 10:36:09.662264] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:15.789 [2024-07-12 10:36:09.662343] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:15.789 [2024-07-12 10:36:09.662373] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:22:15.789 [2024-07-12 10:36:09.662415] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:15.789 [2024-07-12 10:36:09.664544] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:15.789 [2024-07-12 10:36:09.664591] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:15.789 BaseBdev1 00:22:15.789 10:36:09 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:15.789 10:36:09 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:15.789 10:36:09 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:16.046 BaseBdev2_malloc 00:22:16.046 10:36:09 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:16.305 [2024-07-12 10:36:10.062201] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:16.305 [2024-07-12 10:36:10.062261] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:16.305 [2024-07-12 10:36:10.062301] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:22:16.305 [2024-07-12 10:36:10.062356] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:16.305 [2024-07-12 10:36:10.064418] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:16.305 [2024-07-12 10:36:10.064464] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:16.305 BaseBdev2 00:22:16.305 10:36:10 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:16.305 10:36:10 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:16.305 10:36:10 -- bdev/bdev_raid.sh@550 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:16.563 BaseBdev3_malloc 00:22:16.563 10:36:10 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:22:16.563 [2024-07-12 10:36:10.447494] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:22:16.563 [2024-07-12 10:36:10.447556] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:16.563 [2024-07-12 10:36:10.447593] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:22:16.563 [2024-07-12 10:36:10.447632] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:16.563 [2024-07-12 10:36:10.449689] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:16.563 [2024-07-12 10:36:10.449741] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:16.563 BaseBdev3 00:22:16.563 10:36:10 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:16.563 10:36:10 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:16.563 10:36:10 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:22:16.821 BaseBdev4_malloc 00:22:16.821 10:36:10 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:22:17.079 [2024-07-12 10:36:10.816807] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:22:17.079 [2024-07-12 10:36:10.816880] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:17.079 [2024-07-12 10:36:10.816912] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:22:17.079 [2024-07-12 10:36:10.816953] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:17.080 [2024-07-12 10:36:10.819014] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:17.080 [2024-07-12 10:36:10.819068] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:22:17.080 BaseBdev4 00:22:17.080 10:36:10 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:22:17.338 spare_malloc 00:22:17.338 10:36:11 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:17.338 spare_delay 00:22:17.338 10:36:11 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:17.596 [2024-07-12 10:36:11.390485] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:17.596 [2024-07-12 10:36:11.390548] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:17.596 [2024-07-12 10:36:11.390577] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:22:17.596 [2024-07-12 10:36:11.390616] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:17.596 [2024-07-12 10:36:11.392684] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:22:17.596 [2024-07-12 10:36:11.392745] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:17.596 spare 00:22:17.596 10:36:11 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:22:17.855 [2024-07-12 10:36:11.594606] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:17.855 [2024-07-12 10:36:11.596403] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:17.855 [2024-07-12 10:36:11.596487] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:17.855 [2024-07-12 10:36:11.596541] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:17.855 [2024-07-12 10:36:11.596729] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:22:17.855 [2024-07-12 10:36:11.596749] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:17.855 [2024-07-12 10:36:11.596858] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:22:17.855 [2024-07-12 10:36:11.597191] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:22:17.855 [2024-07-12 10:36:11.597212] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:22:17.855 [2024-07-12 10:36:11.597332] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:17.855 10:36:11 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:22:17.855 10:36:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:17.855 10:36:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:17.855 10:36:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:17.855 10:36:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:17.855 10:36:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:17.855 10:36:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:17.855 10:36:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:17.855 10:36:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:17.855 10:36:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:17.855 10:36:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:17.855 10:36:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:18.112 10:36:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:18.112 "name": "raid_bdev1", 00:22:18.112 "uuid": "4ae17dc3-e5b0-4fb5-8fb1-5bca5653271f", 00:22:18.112 "strip_size_kb": 0, 00:22:18.112 "state": "online", 00:22:18.112 "raid_level": "raid1", 00:22:18.112 "superblock": true, 00:22:18.112 "num_base_bdevs": 4, 00:22:18.112 "num_base_bdevs_discovered": 4, 00:22:18.112 "num_base_bdevs_operational": 4, 00:22:18.112 "base_bdevs_list": [ 00:22:18.112 { 00:22:18.112 "name": "BaseBdev1", 00:22:18.112 "uuid": "df2b20ee-b5c2-5bec-99c3-ade1e5b80a4a", 00:22:18.112 "is_configured": true, 00:22:18.112 "data_offset": 2048, 00:22:18.112 "data_size": 63488 00:22:18.112 }, 00:22:18.112 { 00:22:18.112 "name": "BaseBdev2", 00:22:18.112 "uuid": "a03b9a1b-dfda-5b6b-885c-ad82e1f6329e", 00:22:18.112 "is_configured": true, 00:22:18.112 "data_offset": 2048, 
00:22:18.112 "data_size": 63488 00:22:18.112 }, 00:22:18.112 { 00:22:18.112 "name": "BaseBdev3", 00:22:18.112 "uuid": "546f7d08-0165-59cc-bdbf-c1367346d8b6", 00:22:18.112 "is_configured": true, 00:22:18.112 "data_offset": 2048, 00:22:18.112 "data_size": 63488 00:22:18.112 }, 00:22:18.112 { 00:22:18.112 "name": "BaseBdev4", 00:22:18.112 "uuid": "30a842c4-bde2-5a0b-a144-198a7712a9fd", 00:22:18.112 "is_configured": true, 00:22:18.112 "data_offset": 2048, 00:22:18.112 "data_size": 63488 00:22:18.112 } 00:22:18.112 ] 00:22:18.112 }' 00:22:18.112 10:36:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:18.112 10:36:11 -- common/autotest_common.sh@10 -- # set +x 00:22:18.678 10:36:12 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:18.678 10:36:12 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:22:18.936 [2024-07-12 10:36:12.754913] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:18.936 10:36:12 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:22:18.936 10:36:12 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:18.936 10:36:12 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:19.206 10:36:13 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:22:19.206 10:36:13 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:22:19.206 10:36:13 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:22:19.206 10:36:13 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:22:19.206 10:36:13 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:19.206 10:36:13 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:22:19.206 10:36:13 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:19.206 10:36:13 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:22:19.206 10:36:13 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:19.206 10:36:13 -- bdev/nbd_common.sh@12 -- # local i 00:22:19.206 10:36:13 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:19.206 10:36:13 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:19.206 10:36:13 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:22:19.480 [2024-07-12 10:36:13.194815] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:22:19.480 /dev/nbd0 00:22:19.480 10:36:13 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:19.480 10:36:13 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:19.480 10:36:13 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:22:19.480 10:36:13 -- common/autotest_common.sh@857 -- # local i 00:22:19.480 10:36:13 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:19.480 10:36:13 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:19.480 10:36:13 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:22:19.480 10:36:13 -- common/autotest_common.sh@861 -- # break 00:22:19.480 10:36:13 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:19.480 10:36:13 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:19.480 10:36:13 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:19.480 1+0 records in 00:22:19.480 1+0 records out 00:22:19.480 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000264046 s, 15.5 MB/s 00:22:19.480 
10:36:13 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:19.480 10:36:13 -- common/autotest_common.sh@874 -- # size=4096 00:22:19.480 10:36:13 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:19.480 10:36:13 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:19.480 10:36:13 -- common/autotest_common.sh@877 -- # return 0 00:22:19.480 10:36:13 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:19.480 10:36:13 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:19.480 10:36:13 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:22:19.480 10:36:13 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:22:19.480 10:36:13 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:22:26.038 63488+0 records in 00:22:26.038 63488+0 records out 00:22:26.038 32505856 bytes (33 MB, 31 MiB) copied, 5.9561 s, 5.5 MB/s 00:22:26.038 10:36:19 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:22:26.038 10:36:19 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:26.038 10:36:19 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:22:26.038 10:36:19 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:26.038 10:36:19 -- bdev/nbd_common.sh@51 -- # local i 00:22:26.038 10:36:19 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:26.038 10:36:19 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:26.038 10:36:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:26.038 10:36:19 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:26.038 10:36:19 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:26.038 10:36:19 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:26.038 10:36:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:26.038 10:36:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:26.038 10:36:19 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:22:26.038 [2024-07-12 10:36:19.403401] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:26.038 10:36:19 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:22:26.038 10:36:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:26.038 10:36:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:26.038 10:36:19 -- bdev/nbd_common.sh@41 -- # break 00:22:26.038 10:36:19 -- bdev/nbd_common.sh@45 -- # return 0 00:22:26.038 10:36:19 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:22:26.038 [2024-07-12 10:36:19.731127] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:26.038 10:36:19 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:26.038 10:36:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:26.039 10:36:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:26.039 10:36:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:26.039 10:36:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:26.039 10:36:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:26.039 10:36:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:26.039 10:36:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:26.039 10:36:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:26.039 10:36:19 -- bdev/bdev_raid.sh@125 -- 
# local tmp 00:22:26.039 10:36:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:26.039 10:36:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:26.039 10:36:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:26.039 "name": "raid_bdev1", 00:22:26.039 "uuid": "4ae17dc3-e5b0-4fb5-8fb1-5bca5653271f", 00:22:26.039 "strip_size_kb": 0, 00:22:26.039 "state": "online", 00:22:26.039 "raid_level": "raid1", 00:22:26.039 "superblock": true, 00:22:26.039 "num_base_bdevs": 4, 00:22:26.039 "num_base_bdevs_discovered": 3, 00:22:26.039 "num_base_bdevs_operational": 3, 00:22:26.039 "base_bdevs_list": [ 00:22:26.039 { 00:22:26.039 "name": null, 00:22:26.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:26.039 "is_configured": false, 00:22:26.039 "data_offset": 2048, 00:22:26.039 "data_size": 63488 00:22:26.039 }, 00:22:26.039 { 00:22:26.039 "name": "BaseBdev2", 00:22:26.039 "uuid": "a03b9a1b-dfda-5b6b-885c-ad82e1f6329e", 00:22:26.039 "is_configured": true, 00:22:26.039 "data_offset": 2048, 00:22:26.039 "data_size": 63488 00:22:26.039 }, 00:22:26.039 { 00:22:26.039 "name": "BaseBdev3", 00:22:26.039 "uuid": "546f7d08-0165-59cc-bdbf-c1367346d8b6", 00:22:26.039 "is_configured": true, 00:22:26.039 "data_offset": 2048, 00:22:26.039 "data_size": 63488 00:22:26.039 }, 00:22:26.039 { 00:22:26.039 "name": "BaseBdev4", 00:22:26.039 "uuid": "30a842c4-bde2-5a0b-a144-198a7712a9fd", 00:22:26.039 "is_configured": true, 00:22:26.039 "data_offset": 2048, 00:22:26.039 "data_size": 63488 00:22:26.039 } 00:22:26.039 ] 00:22:26.039 }' 00:22:26.039 10:36:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:26.039 10:36:19 -- common/autotest_common.sh@10 -- # set +x 00:22:26.975 10:36:20 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:26.975 [2024-07-12 10:36:20.835370] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:26.975 [2024-07-12 10:36:20.835425] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:26.975 [2024-07-12 10:36:20.845536] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca5170 00:22:26.975 [2024-07-12 10:36:20.847424] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:26.975 10:36:20 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:22:28.353 10:36:21 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:28.353 10:36:21 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:28.353 10:36:21 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:28.353 10:36:21 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:28.353 10:36:21 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:28.353 10:36:21 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:28.353 10:36:21 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:28.353 10:36:22 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:28.353 "name": "raid_bdev1", 00:22:28.353 "uuid": "4ae17dc3-e5b0-4fb5-8fb1-5bca5653271f", 00:22:28.353 "strip_size_kb": 0, 00:22:28.353 "state": "online", 00:22:28.353 "raid_level": "raid1", 00:22:28.353 "superblock": true, 00:22:28.353 "num_base_bdevs": 4, 00:22:28.353 "num_base_bdevs_discovered": 4, 
00:22:28.353 "num_base_bdevs_operational": 4, 00:22:28.353 "process": { 00:22:28.353 "type": "rebuild", 00:22:28.353 "target": "spare", 00:22:28.353 "progress": { 00:22:28.353 "blocks": 24576, 00:22:28.353 "percent": 38 00:22:28.353 } 00:22:28.353 }, 00:22:28.353 "base_bdevs_list": [ 00:22:28.353 { 00:22:28.353 "name": "spare", 00:22:28.353 "uuid": "919c68d4-20fd-5819-b2e7-3d9463d3b7cb", 00:22:28.353 "is_configured": true, 00:22:28.353 "data_offset": 2048, 00:22:28.353 "data_size": 63488 00:22:28.353 }, 00:22:28.353 { 00:22:28.353 "name": "BaseBdev2", 00:22:28.353 "uuid": "a03b9a1b-dfda-5b6b-885c-ad82e1f6329e", 00:22:28.353 "is_configured": true, 00:22:28.353 "data_offset": 2048, 00:22:28.353 "data_size": 63488 00:22:28.353 }, 00:22:28.353 { 00:22:28.353 "name": "BaseBdev3", 00:22:28.353 "uuid": "546f7d08-0165-59cc-bdbf-c1367346d8b6", 00:22:28.353 "is_configured": true, 00:22:28.353 "data_offset": 2048, 00:22:28.353 "data_size": 63488 00:22:28.353 }, 00:22:28.353 { 00:22:28.353 "name": "BaseBdev4", 00:22:28.353 "uuid": "30a842c4-bde2-5a0b-a144-198a7712a9fd", 00:22:28.353 "is_configured": true, 00:22:28.353 "data_offset": 2048, 00:22:28.353 "data_size": 63488 00:22:28.353 } 00:22:28.353 ] 00:22:28.353 }' 00:22:28.353 10:36:22 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:28.353 10:36:22 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:28.353 10:36:22 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:28.353 10:36:22 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:28.353 10:36:22 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:28.631 [2024-07-12 10:36:22.481758] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:28.889 [2024-07-12 10:36:22.555978] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:28.889 [2024-07-12 10:36:22.556054] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:28.889 10:36:22 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:28.889 10:36:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:28.889 10:36:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:28.890 10:36:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:28.890 10:36:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:28.890 10:36:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:28.890 10:36:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:28.890 10:36:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:28.890 10:36:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:28.890 10:36:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:28.890 10:36:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:28.890 10:36:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:29.147 10:36:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:29.147 "name": "raid_bdev1", 00:22:29.147 "uuid": "4ae17dc3-e5b0-4fb5-8fb1-5bca5653271f", 00:22:29.147 "strip_size_kb": 0, 00:22:29.147 "state": "online", 00:22:29.147 "raid_level": "raid1", 00:22:29.147 "superblock": true, 00:22:29.147 "num_base_bdevs": 4, 00:22:29.147 "num_base_bdevs_discovered": 3, 00:22:29.147 "num_base_bdevs_operational": 3, 
00:22:29.147 "base_bdevs_list": [ 00:22:29.147 { 00:22:29.147 "name": null, 00:22:29.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:29.147 "is_configured": false, 00:22:29.147 "data_offset": 2048, 00:22:29.147 "data_size": 63488 00:22:29.147 }, 00:22:29.147 { 00:22:29.147 "name": "BaseBdev2", 00:22:29.147 "uuid": "a03b9a1b-dfda-5b6b-885c-ad82e1f6329e", 00:22:29.147 "is_configured": true, 00:22:29.147 "data_offset": 2048, 00:22:29.147 "data_size": 63488 00:22:29.147 }, 00:22:29.147 { 00:22:29.147 "name": "BaseBdev3", 00:22:29.147 "uuid": "546f7d08-0165-59cc-bdbf-c1367346d8b6", 00:22:29.147 "is_configured": true, 00:22:29.147 "data_offset": 2048, 00:22:29.147 "data_size": 63488 00:22:29.147 }, 00:22:29.147 { 00:22:29.147 "name": "BaseBdev4", 00:22:29.147 "uuid": "30a842c4-bde2-5a0b-a144-198a7712a9fd", 00:22:29.147 "is_configured": true, 00:22:29.147 "data_offset": 2048, 00:22:29.147 "data_size": 63488 00:22:29.147 } 00:22:29.147 ] 00:22:29.147 }' 00:22:29.147 10:36:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:29.147 10:36:22 -- common/autotest_common.sh@10 -- # set +x 00:22:29.711 10:36:23 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:29.711 10:36:23 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:29.711 10:36:23 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:29.711 10:36:23 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:29.711 10:36:23 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:29.711 10:36:23 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:29.711 10:36:23 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:29.969 10:36:23 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:29.969 "name": "raid_bdev1", 00:22:29.969 "uuid": "4ae17dc3-e5b0-4fb5-8fb1-5bca5653271f", 00:22:29.969 "strip_size_kb": 0, 00:22:29.969 "state": "online", 00:22:29.969 "raid_level": "raid1", 00:22:29.969 "superblock": true, 00:22:29.969 "num_base_bdevs": 4, 00:22:29.969 "num_base_bdevs_discovered": 3, 00:22:29.969 "num_base_bdevs_operational": 3, 00:22:29.969 "base_bdevs_list": [ 00:22:29.969 { 00:22:29.969 "name": null, 00:22:29.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:29.969 "is_configured": false, 00:22:29.969 "data_offset": 2048, 00:22:29.969 "data_size": 63488 00:22:29.969 }, 00:22:29.969 { 00:22:29.969 "name": "BaseBdev2", 00:22:29.969 "uuid": "a03b9a1b-dfda-5b6b-885c-ad82e1f6329e", 00:22:29.969 "is_configured": true, 00:22:29.969 "data_offset": 2048, 00:22:29.969 "data_size": 63488 00:22:29.969 }, 00:22:29.969 { 00:22:29.969 "name": "BaseBdev3", 00:22:29.969 "uuid": "546f7d08-0165-59cc-bdbf-c1367346d8b6", 00:22:29.969 "is_configured": true, 00:22:29.969 "data_offset": 2048, 00:22:29.969 "data_size": 63488 00:22:29.969 }, 00:22:29.969 { 00:22:29.969 "name": "BaseBdev4", 00:22:29.969 "uuid": "30a842c4-bde2-5a0b-a144-198a7712a9fd", 00:22:29.969 "is_configured": true, 00:22:29.969 "data_offset": 2048, 00:22:29.969 "data_size": 63488 00:22:29.969 } 00:22:29.969 ] 00:22:29.969 }' 00:22:29.969 10:36:23 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:29.969 10:36:23 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:29.969 10:36:23 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:29.969 10:36:23 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:29.969 10:36:23 -- bdev/bdev_raid.sh@613 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:30.227 [2024-07-12 10:36:24.082015] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:30.227 [2024-07-12 10:36:24.082053] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:30.227 [2024-07-12 10:36:24.091402] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca5310 00:22:30.227 [2024-07-12 10:36:24.092918] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:30.227 10:36:24 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:22:31.211 10:36:25 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:31.211 10:36:25 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:31.211 10:36:25 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:31.211 10:36:25 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:31.211 10:36:25 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:31.211 10:36:25 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:31.211 10:36:25 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:31.469 10:36:25 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:31.469 "name": "raid_bdev1", 00:22:31.469 "uuid": "4ae17dc3-e5b0-4fb5-8fb1-5bca5653271f", 00:22:31.469 "strip_size_kb": 0, 00:22:31.469 "state": "online", 00:22:31.469 "raid_level": "raid1", 00:22:31.469 "superblock": true, 00:22:31.469 "num_base_bdevs": 4, 00:22:31.469 "num_base_bdevs_discovered": 4, 00:22:31.469 "num_base_bdevs_operational": 4, 00:22:31.469 "process": { 00:22:31.469 "type": "rebuild", 00:22:31.469 "target": "spare", 00:22:31.469 "progress": { 00:22:31.469 "blocks": 24576, 00:22:31.469 "percent": 38 00:22:31.469 } 00:22:31.469 }, 00:22:31.469 "base_bdevs_list": [ 00:22:31.469 { 00:22:31.469 "name": "spare", 00:22:31.469 "uuid": "919c68d4-20fd-5819-b2e7-3d9463d3b7cb", 00:22:31.469 "is_configured": true, 00:22:31.469 "data_offset": 2048, 00:22:31.469 "data_size": 63488 00:22:31.469 }, 00:22:31.469 { 00:22:31.469 "name": "BaseBdev2", 00:22:31.469 "uuid": "a03b9a1b-dfda-5b6b-885c-ad82e1f6329e", 00:22:31.469 "is_configured": true, 00:22:31.469 "data_offset": 2048, 00:22:31.469 "data_size": 63488 00:22:31.469 }, 00:22:31.469 { 00:22:31.469 "name": "BaseBdev3", 00:22:31.469 "uuid": "546f7d08-0165-59cc-bdbf-c1367346d8b6", 00:22:31.469 "is_configured": true, 00:22:31.469 "data_offset": 2048, 00:22:31.469 "data_size": 63488 00:22:31.469 }, 00:22:31.469 { 00:22:31.469 "name": "BaseBdev4", 00:22:31.469 "uuid": "30a842c4-bde2-5a0b-a144-198a7712a9fd", 00:22:31.469 "is_configured": true, 00:22:31.469 "data_offset": 2048, 00:22:31.469 "data_size": 63488 00:22:31.469 } 00:22:31.469 ] 00:22:31.469 }' 00:22:31.469 10:36:25 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:31.726 10:36:25 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:31.726 10:36:25 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:31.726 10:36:25 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:31.726 10:36:25 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:22:31.726 10:36:25 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:22:31.726 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:22:31.726 10:36:25 -- 
bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:22:31.726 10:36:25 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:22:31.726 10:36:25 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:22:31.726 10:36:25 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:22:31.984 [2024-07-12 10:36:25.692391] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:31.984 [2024-07-12 10:36:25.700774] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca5310 00:22:31.984 10:36:25 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:22:31.984 10:36:25 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:22:31.984 10:36:25 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:31.984 10:36:25 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:31.984 10:36:25 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:31.984 10:36:25 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:31.984 10:36:25 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:31.984 10:36:25 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:31.984 10:36:25 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:32.242 10:36:26 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:32.242 "name": "raid_bdev1", 00:22:32.242 "uuid": "4ae17dc3-e5b0-4fb5-8fb1-5bca5653271f", 00:22:32.242 "strip_size_kb": 0, 00:22:32.242 "state": "online", 00:22:32.242 "raid_level": "raid1", 00:22:32.242 "superblock": true, 00:22:32.242 "num_base_bdevs": 4, 00:22:32.242 "num_base_bdevs_discovered": 3, 00:22:32.242 "num_base_bdevs_operational": 3, 00:22:32.242 "process": { 00:22:32.242 "type": "rebuild", 00:22:32.242 "target": "spare", 00:22:32.242 "progress": { 00:22:32.242 "blocks": 38912, 00:22:32.242 "percent": 61 00:22:32.242 } 00:22:32.242 }, 00:22:32.242 "base_bdevs_list": [ 00:22:32.242 { 00:22:32.242 "name": "spare", 00:22:32.242 "uuid": "919c68d4-20fd-5819-b2e7-3d9463d3b7cb", 00:22:32.242 "is_configured": true, 00:22:32.242 "data_offset": 2048, 00:22:32.242 "data_size": 63488 00:22:32.242 }, 00:22:32.242 { 00:22:32.242 "name": null, 00:22:32.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:32.242 "is_configured": false, 00:22:32.242 "data_offset": 2048, 00:22:32.242 "data_size": 63488 00:22:32.242 }, 00:22:32.242 { 00:22:32.242 "name": "BaseBdev3", 00:22:32.242 "uuid": "546f7d08-0165-59cc-bdbf-c1367346d8b6", 00:22:32.242 "is_configured": true, 00:22:32.242 "data_offset": 2048, 00:22:32.242 "data_size": 63488 00:22:32.242 }, 00:22:32.242 { 00:22:32.242 "name": "BaseBdev4", 00:22:32.242 "uuid": "30a842c4-bde2-5a0b-a144-198a7712a9fd", 00:22:32.242 "is_configured": true, 00:22:32.242 "data_offset": 2048, 00:22:32.242 "data_size": 63488 00:22:32.242 } 00:22:32.242 ] 00:22:32.242 }' 00:22:32.242 10:36:26 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:32.242 10:36:26 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:32.242 10:36:26 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:32.499 10:36:26 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:32.499 10:36:26 -- bdev/bdev_raid.sh@657 -- # local timeout=499 00:22:32.499 10:36:26 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:32.499 10:36:26 -- bdev/bdev_raid.sh@659 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:32.499 10:36:26 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:32.499 10:36:26 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:32.499 10:36:26 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:32.499 10:36:26 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:32.499 10:36:26 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:32.499 10:36:26 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:32.499 10:36:26 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:32.499 "name": "raid_bdev1", 00:22:32.499 "uuid": "4ae17dc3-e5b0-4fb5-8fb1-5bca5653271f", 00:22:32.499 "strip_size_kb": 0, 00:22:32.499 "state": "online", 00:22:32.499 "raid_level": "raid1", 00:22:32.499 "superblock": true, 00:22:32.499 "num_base_bdevs": 4, 00:22:32.499 "num_base_bdevs_discovered": 3, 00:22:32.499 "num_base_bdevs_operational": 3, 00:22:32.499 "process": { 00:22:32.499 "type": "rebuild", 00:22:32.499 "target": "spare", 00:22:32.499 "progress": { 00:22:32.499 "blocks": 45056, 00:22:32.499 "percent": 70 00:22:32.499 } 00:22:32.499 }, 00:22:32.499 "base_bdevs_list": [ 00:22:32.499 { 00:22:32.499 "name": "spare", 00:22:32.499 "uuid": "919c68d4-20fd-5819-b2e7-3d9463d3b7cb", 00:22:32.499 "is_configured": true, 00:22:32.499 "data_offset": 2048, 00:22:32.499 "data_size": 63488 00:22:32.499 }, 00:22:32.499 { 00:22:32.499 "name": null, 00:22:32.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:32.499 "is_configured": false, 00:22:32.499 "data_offset": 2048, 00:22:32.499 "data_size": 63488 00:22:32.499 }, 00:22:32.499 { 00:22:32.499 "name": "BaseBdev3", 00:22:32.499 "uuid": "546f7d08-0165-59cc-bdbf-c1367346d8b6", 00:22:32.499 "is_configured": true, 00:22:32.499 "data_offset": 2048, 00:22:32.499 "data_size": 63488 00:22:32.499 }, 00:22:32.499 { 00:22:32.499 "name": "BaseBdev4", 00:22:32.499 "uuid": "30a842c4-bde2-5a0b-a144-198a7712a9fd", 00:22:32.499 "is_configured": true, 00:22:32.499 "data_offset": 2048, 00:22:32.499 "data_size": 63488 00:22:32.499 } 00:22:32.499 ] 00:22:32.499 }' 00:22:32.499 10:36:26 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:32.499 10:36:26 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:32.499 10:36:26 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:32.757 10:36:26 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:32.757 10:36:26 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:33.322 [2024-07-12 10:36:27.208344] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:33.322 [2024-07-12 10:36:27.208409] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:33.322 [2024-07-12 10:36:27.208537] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:33.581 10:36:27 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:33.581 10:36:27 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:33.581 10:36:27 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:33.581 10:36:27 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:33.581 10:36:27 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:33.581 10:36:27 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:33.581 10:36:27 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:33.581 10:36:27 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:33.839 10:36:27 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:33.839 "name": "raid_bdev1", 00:22:33.839 "uuid": "4ae17dc3-e5b0-4fb5-8fb1-5bca5653271f", 00:22:33.839 "strip_size_kb": 0, 00:22:33.839 "state": "online", 00:22:33.839 "raid_level": "raid1", 00:22:33.839 "superblock": true, 00:22:33.839 "num_base_bdevs": 4, 00:22:33.839 "num_base_bdevs_discovered": 3, 00:22:33.839 "num_base_bdevs_operational": 3, 00:22:33.839 "base_bdevs_list": [ 00:22:33.839 { 00:22:33.839 "name": "spare", 00:22:33.839 "uuid": "919c68d4-20fd-5819-b2e7-3d9463d3b7cb", 00:22:33.839 "is_configured": true, 00:22:33.839 "data_offset": 2048, 00:22:33.839 "data_size": 63488 00:22:33.839 }, 00:22:33.839 { 00:22:33.839 "name": null, 00:22:33.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:33.839 "is_configured": false, 00:22:33.839 "data_offset": 2048, 00:22:33.839 "data_size": 63488 00:22:33.839 }, 00:22:33.839 { 00:22:33.839 "name": "BaseBdev3", 00:22:33.839 "uuid": "546f7d08-0165-59cc-bdbf-c1367346d8b6", 00:22:33.839 "is_configured": true, 00:22:33.839 "data_offset": 2048, 00:22:33.839 "data_size": 63488 00:22:33.839 }, 00:22:33.839 { 00:22:33.839 "name": "BaseBdev4", 00:22:33.839 "uuid": "30a842c4-bde2-5a0b-a144-198a7712a9fd", 00:22:33.839 "is_configured": true, 00:22:33.839 "data_offset": 2048, 00:22:33.839 "data_size": 63488 00:22:33.839 } 00:22:33.839 ] 00:22:33.839 }' 00:22:33.839 10:36:27 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:33.839 10:36:27 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:33.839 10:36:27 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:34.097 10:36:27 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:22:34.097 10:36:27 -- bdev/bdev_raid.sh@660 -- # break 00:22:34.097 10:36:27 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:34.097 10:36:27 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:34.097 10:36:27 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:34.097 10:36:27 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:34.097 10:36:27 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:34.097 10:36:27 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:34.097 10:36:27 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:34.355 10:36:28 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:34.355 "name": "raid_bdev1", 00:22:34.355 "uuid": "4ae17dc3-e5b0-4fb5-8fb1-5bca5653271f", 00:22:34.355 "strip_size_kb": 0, 00:22:34.355 "state": "online", 00:22:34.355 "raid_level": "raid1", 00:22:34.355 "superblock": true, 00:22:34.355 "num_base_bdevs": 4, 00:22:34.355 "num_base_bdevs_discovered": 3, 00:22:34.355 "num_base_bdevs_operational": 3, 00:22:34.355 "base_bdevs_list": [ 00:22:34.355 { 00:22:34.355 "name": "spare", 00:22:34.355 "uuid": "919c68d4-20fd-5819-b2e7-3d9463d3b7cb", 00:22:34.355 "is_configured": true, 00:22:34.355 "data_offset": 2048, 00:22:34.355 "data_size": 63488 00:22:34.355 }, 00:22:34.355 { 00:22:34.355 "name": null, 00:22:34.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:34.355 "is_configured": false, 00:22:34.355 "data_offset": 2048, 00:22:34.355 "data_size": 63488 00:22:34.355 }, 00:22:34.355 { 00:22:34.355 "name": "BaseBdev3", 00:22:34.355 "uuid": 
"546f7d08-0165-59cc-bdbf-c1367346d8b6", 00:22:34.355 "is_configured": true, 00:22:34.355 "data_offset": 2048, 00:22:34.355 "data_size": 63488 00:22:34.355 }, 00:22:34.355 { 00:22:34.355 "name": "BaseBdev4", 00:22:34.355 "uuid": "30a842c4-bde2-5a0b-a144-198a7712a9fd", 00:22:34.355 "is_configured": true, 00:22:34.355 "data_offset": 2048, 00:22:34.355 "data_size": 63488 00:22:34.355 } 00:22:34.355 ] 00:22:34.355 }' 00:22:34.355 10:36:28 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:34.355 10:36:28 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:34.355 10:36:28 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:34.355 10:36:28 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:34.355 10:36:28 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:34.355 10:36:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:34.355 10:36:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:34.355 10:36:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:34.355 10:36:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:34.355 10:36:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:34.355 10:36:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:34.355 10:36:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:34.355 10:36:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:34.355 10:36:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:34.355 10:36:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:34.355 10:36:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:34.613 10:36:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:34.614 "name": "raid_bdev1", 00:22:34.614 "uuid": "4ae17dc3-e5b0-4fb5-8fb1-5bca5653271f", 00:22:34.614 "strip_size_kb": 0, 00:22:34.614 "state": "online", 00:22:34.614 "raid_level": "raid1", 00:22:34.614 "superblock": true, 00:22:34.614 "num_base_bdevs": 4, 00:22:34.614 "num_base_bdevs_discovered": 3, 00:22:34.614 "num_base_bdevs_operational": 3, 00:22:34.614 "base_bdevs_list": [ 00:22:34.614 { 00:22:34.614 "name": "spare", 00:22:34.614 "uuid": "919c68d4-20fd-5819-b2e7-3d9463d3b7cb", 00:22:34.614 "is_configured": true, 00:22:34.614 "data_offset": 2048, 00:22:34.614 "data_size": 63488 00:22:34.614 }, 00:22:34.614 { 00:22:34.614 "name": null, 00:22:34.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:34.614 "is_configured": false, 00:22:34.614 "data_offset": 2048, 00:22:34.614 "data_size": 63488 00:22:34.614 }, 00:22:34.614 { 00:22:34.614 "name": "BaseBdev3", 00:22:34.614 "uuid": "546f7d08-0165-59cc-bdbf-c1367346d8b6", 00:22:34.614 "is_configured": true, 00:22:34.614 "data_offset": 2048, 00:22:34.614 "data_size": 63488 00:22:34.614 }, 00:22:34.614 { 00:22:34.614 "name": "BaseBdev4", 00:22:34.614 "uuid": "30a842c4-bde2-5a0b-a144-198a7712a9fd", 00:22:34.614 "is_configured": true, 00:22:34.614 "data_offset": 2048, 00:22:34.614 "data_size": 63488 00:22:34.614 } 00:22:34.614 ] 00:22:34.614 }' 00:22:34.614 10:36:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:34.614 10:36:28 -- common/autotest_common.sh@10 -- # set +x 00:22:35.179 10:36:28 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:35.438 [2024-07-12 10:36:29.237273] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: 
raid_bdev1 00:22:35.438 [2024-07-12 10:36:29.237299] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:35.438 [2024-07-12 10:36:29.237370] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:35.438 [2024-07-12 10:36:29.237450] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:35.438 [2024-07-12 10:36:29.237463] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:22:35.438 10:36:29 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:35.438 10:36:29 -- bdev/bdev_raid.sh@671 -- # jq length 00:22:35.696 10:36:29 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:22:35.696 10:36:29 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:22:35.696 10:36:29 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:22:35.696 10:36:29 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:35.696 10:36:29 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:22:35.696 10:36:29 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:35.696 10:36:29 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:22:35.696 10:36:29 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:35.696 10:36:29 -- bdev/nbd_common.sh@12 -- # local i 00:22:35.696 10:36:29 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:35.696 10:36:29 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:35.696 10:36:29 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:22:35.956 /dev/nbd0 00:22:35.956 10:36:29 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:35.956 10:36:29 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:35.956 10:36:29 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:22:35.956 10:36:29 -- common/autotest_common.sh@857 -- # local i 00:22:35.956 10:36:29 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:35.956 10:36:29 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:35.956 10:36:29 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:22:35.956 10:36:29 -- common/autotest_common.sh@861 -- # break 00:22:35.956 10:36:29 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:35.956 10:36:29 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:35.956 10:36:29 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:35.956 1+0 records in 00:22:35.956 1+0 records out 00:22:35.956 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000354364 s, 11.6 MB/s 00:22:35.956 10:36:29 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:35.956 10:36:29 -- common/autotest_common.sh@874 -- # size=4096 00:22:35.956 10:36:29 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:35.956 10:36:29 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:35.956 10:36:29 -- common/autotest_common.sh@877 -- # return 0 00:22:35.956 10:36:29 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:35.956 10:36:29 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:35.956 10:36:29 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:22:36.215 
/dev/nbd1 00:22:36.215 10:36:29 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:36.215 10:36:29 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:36.215 10:36:29 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:22:36.215 10:36:29 -- common/autotest_common.sh@857 -- # local i 00:22:36.215 10:36:29 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:36.215 10:36:29 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:36.215 10:36:29 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:22:36.215 10:36:29 -- common/autotest_common.sh@861 -- # break 00:22:36.215 10:36:29 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:36.215 10:36:29 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:36.215 10:36:29 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:36.215 1+0 records in 00:22:36.215 1+0 records out 00:22:36.215 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000450008 s, 9.1 MB/s 00:22:36.215 10:36:29 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:36.215 10:36:29 -- common/autotest_common.sh@874 -- # size=4096 00:22:36.215 10:36:29 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:36.215 10:36:29 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:36.215 10:36:29 -- common/autotest_common.sh@877 -- # return 0 00:22:36.215 10:36:29 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:36.215 10:36:29 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:36.215 10:36:29 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:22:36.215 10:36:30 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:22:36.215 10:36:30 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:36.215 10:36:30 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:22:36.215 10:36:30 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:36.215 10:36:30 -- bdev/nbd_common.sh@51 -- # local i 00:22:36.215 10:36:30 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:36.215 10:36:30 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:36.473 10:36:30 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:36.473 10:36:30 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:36.473 10:36:30 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:36.473 10:36:30 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:36.473 10:36:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:36.473 10:36:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:36.473 10:36:30 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:22:36.731 10:36:30 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:22:36.731 10:36:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:36.732 10:36:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:36.732 10:36:30 -- bdev/nbd_common.sh@41 -- # break 00:22:36.732 10:36:30 -- bdev/nbd_common.sh@45 -- # return 0 00:22:36.732 10:36:30 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:36.732 10:36:30 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:36.732 10:36:30 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:36.732 10:36:30 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:36.732 10:36:30 -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:36.732 10:36:30 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:36.732 10:36:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:36.732 10:36:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:36.732 10:36:30 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:22:36.990 10:36:30 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:22:36.990 10:36:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:36.990 10:36:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:36.990 10:36:30 -- bdev/nbd_common.sh@41 -- # break 00:22:36.990 10:36:30 -- bdev/nbd_common.sh@45 -- # return 0 00:22:36.990 10:36:30 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:22:36.990 10:36:30 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:36.990 10:36:30 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:22:36.990 10:36:30 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:22:37.249 10:36:30 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:37.507 [2024-07-12 10:36:31.234703] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:37.507 [2024-07-12 10:36:31.234774] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:37.507 [2024-07-12 10:36:31.234812] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:22:37.507 [2024-07-12 10:36:31.234831] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:37.508 [2024-07-12 10:36:31.236989] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:37.508 [2024-07-12 10:36:31.237049] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:37.508 [2024-07-12 10:36:31.237145] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:22:37.508 [2024-07-12 10:36:31.237200] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:37.508 BaseBdev1 00:22:37.508 10:36:31 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:37.508 10:36:31 -- bdev/bdev_raid.sh@695 -- # '[' -z '' ']' 00:22:37.508 10:36:31 -- bdev/bdev_raid.sh@696 -- # continue 00:22:37.508 10:36:31 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:37.508 10:36:31 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:22:37.508 10:36:31 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:22:37.766 10:36:31 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:22:37.766 [2024-07-12 10:36:31.590769] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:22:37.766 [2024-07-12 10:36:31.590829] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:37.766 [2024-07-12 10:36:31.590862] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:22:37.766 [2024-07-12 10:36:31.590880] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:37.766 [2024-07-12 10:36:31.591235] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
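The nbd teardown just above relies on a small polling helper: waitfornbd_exit watches /proc/partitions until the kernel stops listing the device. A minimal sketch reconstructed from the xtrace (nbd_common.sh@35-45); the loop bound and 0.1 s interval match the trace, but the exact SPDK source may differ:

waitfornbd_exit() {
    local nbd_name=$1 i
    # Give the kernel up to 20 * 0.1 s to drop the device from /proc/partitions.
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions || break
        sleep 0.1
    done
    return 0
}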
00:22:37.766 [2024-07-12 10:36:31.591292] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:37.766 [2024-07-12 10:36:31.591385] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:22:37.766 [2024-07-12 10:36:31.591399] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev3 (4) greater than existing raid bdev raid_bdev1 (1) 00:22:37.766 [2024-07-12 10:36:31.591406] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:37.766 [2024-07-12 10:36:31.591427] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state configuring 00:22:37.766 [2024-07-12 10:36:31.591487] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:37.766 BaseBdev3 00:22:37.766 10:36:31 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:37.766 10:36:31 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:22:37.766 10:36:31 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:22:38.027 10:36:31 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:22:38.285 [2024-07-12 10:36:31.966823] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:22:38.285 [2024-07-12 10:36:31.966885] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:38.285 [2024-07-12 10:36:31.966915] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:22:38.285 [2024-07-12 10:36:31.966938] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:38.285 [2024-07-12 10:36:31.967294] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:38.285 [2024-07-12 10:36:31.967364] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:22:38.285 [2024-07-12 10:36:31.967442] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:22:38.285 [2024-07-12 10:36:31.967465] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:38.285 BaseBdev4 00:22:38.285 10:36:31 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:22:38.285 10:36:32 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:38.543 [2024-07-12 10:36:32.398889] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:38.543 [2024-07-12 10:36:32.398950] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:38.543 [2024-07-12 10:36:32.398980] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:22:38.543 [2024-07-12 10:36:32.399004] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:38.543 [2024-07-12 10:36:32.399386] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:38.543 [2024-07-12 10:36:32.399443] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:38.543 [2024-07-12 10:36:32.399533] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found 
on bdev spare 00:22:38.543 [2024-07-12 10:36:32.399565] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:38.543 spare 00:22:38.543 10:36:32 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:38.543 10:36:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:38.543 10:36:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:38.543 10:36:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:38.543 10:36:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:38.543 10:36:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:38.543 10:36:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:38.543 10:36:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:38.543 10:36:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:38.543 10:36:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:38.543 10:36:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:38.543 10:36:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:38.801 [2024-07-12 10:36:32.499657] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c380 00:22:38.801 [2024-07-12 10:36:32.499679] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:38.801 [2024-07-12 10:36:32.499782] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc5f20 00:22:38.801 [2024-07-12 10:36:32.500207] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c380 00:22:38.801 [2024-07-12 10:36:32.500244] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c380 00:22:38.801 [2024-07-12 10:36:32.500382] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:38.801 10:36:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:38.801 "name": "raid_bdev1", 00:22:38.801 "uuid": "4ae17dc3-e5b0-4fb5-8fb1-5bca5653271f", 00:22:38.801 "strip_size_kb": 0, 00:22:38.801 "state": "online", 00:22:38.801 "raid_level": "raid1", 00:22:38.801 "superblock": true, 00:22:38.801 "num_base_bdevs": 4, 00:22:38.801 "num_base_bdevs_discovered": 3, 00:22:38.801 "num_base_bdevs_operational": 3, 00:22:38.801 "base_bdevs_list": [ 00:22:38.801 { 00:22:38.801 "name": "spare", 00:22:38.801 "uuid": "919c68d4-20fd-5819-b2e7-3d9463d3b7cb", 00:22:38.801 "is_configured": true, 00:22:38.801 "data_offset": 2048, 00:22:38.801 "data_size": 63488 00:22:38.801 }, 00:22:38.801 { 00:22:38.801 "name": null, 00:22:38.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:38.801 "is_configured": false, 00:22:38.801 "data_offset": 2048, 00:22:38.801 "data_size": 63488 00:22:38.801 }, 00:22:38.801 { 00:22:38.801 "name": "BaseBdev3", 00:22:38.801 "uuid": "546f7d08-0165-59cc-bdbf-c1367346d8b6", 00:22:38.801 "is_configured": true, 00:22:38.801 "data_offset": 2048, 00:22:38.801 "data_size": 63488 00:22:38.801 }, 00:22:38.801 { 00:22:38.801 "name": "BaseBdev4", 00:22:38.801 "uuid": "30a842c4-bde2-5a0b-a144-198a7712a9fd", 00:22:38.801 "is_configured": true, 00:22:38.801 "data_offset": 2048, 00:22:38.801 "data_size": 63488 00:22:38.801 } 00:22:38.801 ] 00:22:38.801 }' 00:22:38.801 10:36:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:38.801 10:36:32 -- common/autotest_common.sh@10 -- # set +x 00:22:39.369 10:36:33 -- bdev/bdev_raid.sh@705 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:22:39.369 10:36:33 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:39.369 10:36:33 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:39.369 10:36:33 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:39.369 10:36:33 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:39.369 10:36:33 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:39.369 10:36:33 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:39.627 10:36:33 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:39.627 "name": "raid_bdev1", 00:22:39.627 "uuid": "4ae17dc3-e5b0-4fb5-8fb1-5bca5653271f", 00:22:39.627 "strip_size_kb": 0, 00:22:39.627 "state": "online", 00:22:39.627 "raid_level": "raid1", 00:22:39.627 "superblock": true, 00:22:39.627 "num_base_bdevs": 4, 00:22:39.627 "num_base_bdevs_discovered": 3, 00:22:39.627 "num_base_bdevs_operational": 3, 00:22:39.627 "base_bdevs_list": [ 00:22:39.627 { 00:22:39.627 "name": "spare", 00:22:39.627 "uuid": "919c68d4-20fd-5819-b2e7-3d9463d3b7cb", 00:22:39.627 "is_configured": true, 00:22:39.627 "data_offset": 2048, 00:22:39.627 "data_size": 63488 00:22:39.627 }, 00:22:39.627 { 00:22:39.627 "name": null, 00:22:39.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:39.627 "is_configured": false, 00:22:39.627 "data_offset": 2048, 00:22:39.627 "data_size": 63488 00:22:39.627 }, 00:22:39.627 { 00:22:39.627 "name": "BaseBdev3", 00:22:39.627 "uuid": "546f7d08-0165-59cc-bdbf-c1367346d8b6", 00:22:39.627 "is_configured": true, 00:22:39.627 "data_offset": 2048, 00:22:39.627 "data_size": 63488 00:22:39.627 }, 00:22:39.627 { 00:22:39.627 "name": "BaseBdev4", 00:22:39.627 "uuid": "30a842c4-bde2-5a0b-a144-198a7712a9fd", 00:22:39.627 "is_configured": true, 00:22:39.627 "data_offset": 2048, 00:22:39.627 "data_size": 63488 00:22:39.627 } 00:22:39.627 ] 00:22:39.627 }' 00:22:39.627 10:36:33 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:39.885 10:36:33 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:39.885 10:36:33 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:39.885 10:36:33 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:39.885 10:36:33 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:39.885 10:36:33 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:22:40.143 10:36:33 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:22:40.143 10:36:33 -- bdev/bdev_raid.sh@709 -- # killprocess 128689 00:22:40.143 10:36:33 -- common/autotest_common.sh@926 -- # '[' -z 128689 ']' 00:22:40.143 10:36:33 -- common/autotest_common.sh@930 -- # kill -0 128689 00:22:40.143 10:36:33 -- common/autotest_common.sh@931 -- # uname 00:22:40.143 10:36:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:40.143 10:36:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 128689 00:22:40.143 killing process with pid 128689 00:22:40.143 Received shutdown signal, test time was about 60.000000 seconds 00:22:40.143 00:22:40.143 Latency(us) 00:22:40.143 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:40.143 =================================================================================================================== 00:22:40.143 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:40.143 10:36:33 -- 
common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:40.143 10:36:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:40.143 10:36:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 128689' 00:22:40.143 10:36:33 -- common/autotest_common.sh@945 -- # kill 128689 00:22:40.143 10:36:33 -- common/autotest_common.sh@950 -- # wait 128689 00:22:40.143 [2024-07-12 10:36:33.831927] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:40.143 [2024-07-12 10:36:33.831988] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:40.143 [2024-07-12 10:36:33.832055] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:40.143 [2024-07-12 10:36:33.832067] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c380 name raid_bdev1, state offline 00:22:40.401 [2024-07-12 10:36:34.144295] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:41.336 ************************************ 00:22:41.336 END TEST raid_rebuild_test_sb 00:22:41.336 ************************************ 00:22:41.336 10:36:35 -- bdev/bdev_raid.sh@711 -- # return 0 00:22:41.336 00:22:41.336 real 0m26.767s 00:22:41.336 user 0m39.415s 00:22:41.336 sys 0m3.858s 00:22:41.336 10:36:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:41.336 10:36:35 -- common/autotest_common.sh@10 -- # set +x 00:22:41.336 10:36:35 -- bdev/bdev_raid.sh@737 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true 00:22:41.336 10:36:35 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:22:41.336 10:36:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:41.336 10:36:35 -- common/autotest_common.sh@10 -- # set +x 00:22:41.336 ************************************ 00:22:41.336 START TEST raid_rebuild_test_io 00:22:41.336 ************************************ 00:22:41.336 10:36:35 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 4 false true 00:22:41.336 10:36:35 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:22:41.336 10:36:35 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:22:41.336 10:36:35 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:22:41.336 10:36:35 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:22:41.336 10:36:35 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:22:41.336 10:36:35 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:22:41.336 10:36:35 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:41.336 10:36:35 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:22:41.336 10:36:35 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:41.336 10:36:35 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:41.336 10:36:35 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:22:41.336 10:36:35 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:41.336 10:36:35 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:41.336 10:36:35 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:22:41.336 10:36:35 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:41.336 10:36:35 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:41.336 10:36:35 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:22:41.336 10:36:35 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:41.336 10:36:35 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:41.336 10:36:35 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:22:41.336 10:36:35 -- 
bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:22:41.336 10:36:35 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:22:41.336 10:36:35 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:22:41.336 10:36:35 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:22:41.336 10:36:35 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:22:41.336 10:36:35 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:22:41.336 10:36:35 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:22:41.336 10:36:35 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:22:41.336 10:36:35 -- bdev/bdev_raid.sh@544 -- # raid_pid=129395 00:22:41.336 10:36:35 -- bdev/bdev_raid.sh@545 -- # waitforlisten 129395 /var/tmp/spdk-raid.sock 00:22:41.336 10:36:35 -- common/autotest_common.sh@819 -- # '[' -z 129395 ']' 00:22:41.336 10:36:35 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:41.336 10:36:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:41.336 10:36:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:41.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:41.336 10:36:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:41.337 10:36:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:41.337 10:36:35 -- common/autotest_common.sh@10 -- # set +x 00:22:41.337 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:41.337 Zero copy mechanism will not be used. 00:22:41.337 [2024-07-12 10:36:35.198061] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
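For this background-I/O variant the harness starts bdevperf as a long-lived RPC server (-z) on a dedicated socket and only triggers the workload later over RPC; the 3M I/O size is deliberately above bdevperf's 64 KiB zero-copy threshold, hence the notice above. A condensed sketch of the launch sequence as it appears in the trace ($rootdir and the PID handling are illustrative, not verbatim source):

rpc_server=/var/tmp/spdk-raid.sock
# Same flags as the trace: 60 s randrw at 50% reads, 3 MiB I/Os, queue depth 2.
"$rootdir"/build/examples/bdevperf -r "$rpc_server" -T raid_bdev1 \
    -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
raid_pid=$!
# Block until the app is up and answering RPCs on the socket.
waitforlisten "$raid_pid" "$rpc_server"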
00:22:41.337 [2024-07-12 10:36:35.198252] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129395 ] 00:22:41.595 [2024-07-12 10:36:35.367750] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:41.853 [2024-07-12 10:36:35.525375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:41.853 [2024-07-12 10:36:35.689935] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:42.419 10:36:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:42.419 10:36:36 -- common/autotest_common.sh@852 -- # return 0 00:22:42.419 10:36:36 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:42.419 10:36:36 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:42.419 10:36:36 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:42.677 BaseBdev1 00:22:42.677 10:36:36 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:42.677 10:36:36 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:42.677 10:36:36 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:42.935 BaseBdev2 00:22:42.935 10:36:36 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:42.935 10:36:36 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:42.935 10:36:36 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:43.193 BaseBdev3 00:22:43.193 10:36:36 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:43.193 10:36:36 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:43.193 10:36:36 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:22:43.193 BaseBdev4 00:22:43.193 10:36:37 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:22:43.452 spare_malloc 00:22:43.452 10:36:37 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:43.711 spare_delay 00:22:43.711 10:36:37 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:43.969 [2024-07-12 10:36:37.642350] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:43.969 [2024-07-12 10:36:37.642423] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:43.969 [2024-07-12 10:36:37.642460] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:22:43.969 [2024-07-12 10:36:37.642504] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:43.969 [2024-07-12 10:36:37.644679] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:43.969 [2024-07-12 10:36:37.644722] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:43.969 spare 00:22:43.969 10:36:37 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:22:43.969 [2024-07-12 10:36:37.866441] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:43.969 [2024-07-12 10:36:37.868352] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:43.970 [2024-07-12 10:36:37.868404] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:43.970 [2024-07-12 10:36:37.868439] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:43.970 [2024-07-12 10:36:37.868507] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:22:43.970 [2024-07-12 10:36:37.868518] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:22:43.970 [2024-07-12 10:36:37.868689] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:22:43.970 [2024-07-12 10:36:37.869032] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:22:43.970 [2024-07-12 10:36:37.869047] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:22:43.970 [2024-07-12 10:36:37.869202] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:43.970 10:36:37 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:22:43.970 10:36:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:43.970 10:36:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:43.970 10:36:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:43.970 10:36:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:43.970 10:36:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:43.970 10:36:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:43.970 10:36:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:43.970 10:36:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:43.970 10:36:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:43.970 10:36:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:43.970 10:36:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:44.228 10:36:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:44.228 "name": "raid_bdev1", 00:22:44.228 "uuid": "ac392fe5-0e79-4933-a7a2-a6c8865fced0", 00:22:44.228 "strip_size_kb": 0, 00:22:44.228 "state": "online", 00:22:44.228 "raid_level": "raid1", 00:22:44.228 "superblock": false, 00:22:44.228 "num_base_bdevs": 4, 00:22:44.228 "num_base_bdevs_discovered": 4, 00:22:44.228 "num_base_bdevs_operational": 4, 00:22:44.228 "base_bdevs_list": [ 00:22:44.228 { 00:22:44.228 "name": "BaseBdev1", 00:22:44.228 "uuid": "ee88af68-8faf-4d0f-8324-e8cd11c46138", 00:22:44.228 "is_configured": true, 00:22:44.228 "data_offset": 0, 00:22:44.228 "data_size": 65536 00:22:44.228 }, 00:22:44.228 { 00:22:44.228 "name": "BaseBdev2", 00:22:44.228 "uuid": "6763cd70-474c-4897-9d08-1c6806d7907a", 00:22:44.228 "is_configured": true, 00:22:44.228 "data_offset": 0, 00:22:44.228 "data_size": 65536 00:22:44.228 }, 00:22:44.228 { 00:22:44.228 "name": "BaseBdev3", 00:22:44.228 "uuid": "7279375f-5fa8-48fc-88ef-cda970198058", 00:22:44.228 "is_configured": true, 00:22:44.228 "data_offset": 0, 00:22:44.228 "data_size": 65536 00:22:44.228 }, 
00:22:44.228 { 00:22:44.228 "name": "BaseBdev4", 00:22:44.228 "uuid": "83c2973c-9e40-43ea-b807-d5e8b5b077c8", 00:22:44.228 "is_configured": true, 00:22:44.228 "data_offset": 0, 00:22:44.228 "data_size": 65536 00:22:44.228 } 00:22:44.228 ] 00:22:44.228 }' 00:22:44.228 10:36:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:44.228 10:36:38 -- common/autotest_common.sh@10 -- # set +x 00:22:45.163 10:36:38 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:45.163 10:36:38 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:22:45.163 [2024-07-12 10:36:38.994794] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:45.163 10:36:39 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:22:45.163 10:36:39 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:45.163 10:36:39 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:45.422 10:36:39 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:22:45.422 10:36:39 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:22:45.422 10:36:39 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:22:45.422 10:36:39 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:22:45.680 [2024-07-12 10:36:39.344874] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:22:45.680 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:45.680 Zero copy mechanism will not be used. 00:22:45.680 Running I/O for 60 seconds... 
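What follows checks that hot-removing BaseBdev1 while I/O is in flight leaves the raid1 array online but degraded: num_base_bdevs_discovered drops from 4 to 3 and slot 0 reads back as a null placeholder with the all-zero UUID rather than vanishing from the list. A sketch of that assertion, assuming $rpc_py points at scripts/rpc.py with the raid socket:

tmp=$($rpc_py bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
[[ $(jq -r '.state' <<< "$tmp") == "online" ]]
(( $(jq -r '.num_base_bdevs_discovered' <<< "$tmp") == 3 ))
# The removed member stays in the list as a placeholder entry.
[[ $(jq -r '.base_bdevs_list[0].name' <<< "$tmp") == "null" ]]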
00:22:45.680 [2024-07-12 10:36:39.410732] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:45.680 [2024-07-12 10:36:39.410944] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005930 00:22:45.680 10:36:39 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:45.680 10:36:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:45.680 10:36:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:45.680 10:36:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:45.680 10:36:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:45.680 10:36:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:45.680 10:36:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:45.680 10:36:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:45.680 10:36:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:45.680 10:36:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:45.681 10:36:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:45.681 10:36:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:45.940 10:36:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:45.940 "name": "raid_bdev1", 00:22:45.940 "uuid": "ac392fe5-0e79-4933-a7a2-a6c8865fced0", 00:22:45.940 "strip_size_kb": 0, 00:22:45.940 "state": "online", 00:22:45.940 "raid_level": "raid1", 00:22:45.940 "superblock": false, 00:22:45.940 "num_base_bdevs": 4, 00:22:45.940 "num_base_bdevs_discovered": 3, 00:22:45.940 "num_base_bdevs_operational": 3, 00:22:45.940 "base_bdevs_list": [ 00:22:45.940 { 00:22:45.940 "name": null, 00:22:45.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:45.940 "is_configured": false, 00:22:45.940 "data_offset": 0, 00:22:45.940 "data_size": 65536 00:22:45.940 }, 00:22:45.940 { 00:22:45.940 "name": "BaseBdev2", 00:22:45.940 "uuid": "6763cd70-474c-4897-9d08-1c6806d7907a", 00:22:45.940 "is_configured": true, 00:22:45.940 "data_offset": 0, 00:22:45.940 "data_size": 65536 00:22:45.940 }, 00:22:45.940 { 00:22:45.940 "name": "BaseBdev3", 00:22:45.940 "uuid": "7279375f-5fa8-48fc-88ef-cda970198058", 00:22:45.940 "is_configured": true, 00:22:45.940 "data_offset": 0, 00:22:45.940 "data_size": 65536 00:22:45.940 }, 00:22:45.940 { 00:22:45.940 "name": "BaseBdev4", 00:22:45.940 "uuid": "83c2973c-9e40-43ea-b807-d5e8b5b077c8", 00:22:45.940 "is_configured": true, 00:22:45.940 "data_offset": 0, 00:22:45.940 "data_size": 65536 00:22:45.940 } 00:22:45.940 ] 00:22:45.940 }' 00:22:45.940 10:36:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:45.940 10:36:39 -- common/autotest_common.sh@10 -- # set +x 00:22:46.508 10:36:40 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:46.767 [2024-07-12 10:36:40.618081] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:46.767 [2024-07-12 10:36:40.618146] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:46.767 10:36:40 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:22:46.767 [2024-07-12 10:36:40.674261] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:22:46.767 [2024-07-12 10:36:40.676066] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:47.025 [2024-07-12 
10:36:40.783817] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:47.025 [2024-07-12 10:36:40.901387] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:47.025 [2024-07-12 10:36:40.901692] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:47.284 [2024-07-12 10:36:41.142733] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:47.284 [2024-07-12 10:36:41.143908] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:47.542 [2024-07-12 10:36:41.360242] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:47.542 [2024-07-12 10:36:41.360874] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:47.801 10:36:41 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:47.801 10:36:41 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:47.801 10:36:41 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:47.801 10:36:41 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:47.801 10:36:41 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:47.801 10:36:41 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:47.801 10:36:41 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:47.801 [2024-07-12 10:36:41.694462] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:22:47.801 [2024-07-12 10:36:41.694984] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:22:48.060 [2024-07-12 10:36:41.814039] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:22:48.060 10:36:41 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:48.060 "name": "raid_bdev1", 00:22:48.060 "uuid": "ac392fe5-0e79-4933-a7a2-a6c8865fced0", 00:22:48.060 "strip_size_kb": 0, 00:22:48.060 "state": "online", 00:22:48.060 "raid_level": "raid1", 00:22:48.060 "superblock": false, 00:22:48.060 "num_base_bdevs": 4, 00:22:48.060 "num_base_bdevs_discovered": 4, 00:22:48.060 "num_base_bdevs_operational": 4, 00:22:48.060 "process": { 00:22:48.060 "type": "rebuild", 00:22:48.060 "target": "spare", 00:22:48.060 "progress": { 00:22:48.060 "blocks": 16384, 00:22:48.060 "percent": 25 00:22:48.060 } 00:22:48.060 }, 00:22:48.060 "base_bdevs_list": [ 00:22:48.060 { 00:22:48.060 "name": "spare", 00:22:48.060 "uuid": "6e2a6a80-60b5-5ce5-a114-0e3f9d270ceb", 00:22:48.060 "is_configured": true, 00:22:48.060 "data_offset": 0, 00:22:48.060 "data_size": 65536 00:22:48.060 }, 00:22:48.060 { 00:22:48.060 "name": "BaseBdev2", 00:22:48.060 "uuid": "6763cd70-474c-4897-9d08-1c6806d7907a", 00:22:48.060 "is_configured": true, 00:22:48.060 "data_offset": 0, 00:22:48.060 "data_size": 65536 00:22:48.060 }, 00:22:48.060 { 00:22:48.060 "name": "BaseBdev3", 00:22:48.060 "uuid": "7279375f-5fa8-48fc-88ef-cda970198058", 00:22:48.060 "is_configured": true, 00:22:48.060 "data_offset": 0, 00:22:48.060 
"data_size": 65536 00:22:48.060 }, 00:22:48.060 { 00:22:48.060 "name": "BaseBdev4", 00:22:48.060 "uuid": "83c2973c-9e40-43ea-b807-d5e8b5b077c8", 00:22:48.060 "is_configured": true, 00:22:48.060 "data_offset": 0, 00:22:48.060 "data_size": 65536 00:22:48.060 } 00:22:48.060 ] 00:22:48.060 }' 00:22:48.060 10:36:41 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:48.060 10:36:41 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:48.060 10:36:41 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:48.319 10:36:42 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:48.319 10:36:42 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:48.319 [2024-07-12 10:36:42.034407] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:22:48.319 [2024-07-12 10:36:42.171423] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:22:48.319 [2024-07-12 10:36:42.189792] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:48.577 [2024-07-12 10:36:42.402471] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:48.577 [2024-07-12 10:36:42.421090] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:48.577 [2024-07-12 10:36:42.447686] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005930 00:22:48.577 10:36:42 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:48.577 10:36:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:48.577 10:36:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:48.577 10:36:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:48.577 10:36:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:48.577 10:36:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:48.577 10:36:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:48.577 10:36:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:48.577 10:36:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:48.577 10:36:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:48.577 10:36:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:48.577 10:36:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:48.836 10:36:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:48.836 "name": "raid_bdev1", 00:22:48.836 "uuid": "ac392fe5-0e79-4933-a7a2-a6c8865fced0", 00:22:48.836 "strip_size_kb": 0, 00:22:48.836 "state": "online", 00:22:48.836 "raid_level": "raid1", 00:22:48.836 "superblock": false, 00:22:48.836 "num_base_bdevs": 4, 00:22:48.836 "num_base_bdevs_discovered": 3, 00:22:48.836 "num_base_bdevs_operational": 3, 00:22:48.836 "base_bdevs_list": [ 00:22:48.836 { 00:22:48.836 "name": null, 00:22:48.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:48.836 "is_configured": false, 00:22:48.836 "data_offset": 0, 00:22:48.836 "data_size": 65536 00:22:48.836 }, 00:22:48.836 { 00:22:48.836 "name": "BaseBdev2", 00:22:48.836 "uuid": "6763cd70-474c-4897-9d08-1c6806d7907a", 00:22:48.836 "is_configured": true, 00:22:48.836 "data_offset": 0, 00:22:48.836 "data_size": 65536 00:22:48.836 }, 
00:22:48.836 { 00:22:48.836 "name": "BaseBdev3", 00:22:48.836 "uuid": "7279375f-5fa8-48fc-88ef-cda970198058", 00:22:48.836 "is_configured": true, 00:22:48.836 "data_offset": 0, 00:22:48.836 "data_size": 65536 00:22:48.836 }, 00:22:48.836 { 00:22:48.836 "name": "BaseBdev4", 00:22:48.836 "uuid": "83c2973c-9e40-43ea-b807-d5e8b5b077c8", 00:22:48.836 "is_configured": true, 00:22:48.836 "data_offset": 0, 00:22:48.836 "data_size": 65536 00:22:48.836 } 00:22:48.836 ] 00:22:48.836 }' 00:22:48.836 10:36:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:48.836 10:36:42 -- common/autotest_common.sh@10 -- # set +x 00:22:49.769 10:36:43 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:49.769 10:36:43 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:49.769 10:36:43 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:49.769 10:36:43 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:49.769 10:36:43 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:49.769 10:36:43 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:49.769 10:36:43 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:49.769 10:36:43 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:49.769 "name": "raid_bdev1", 00:22:49.769 "uuid": "ac392fe5-0e79-4933-a7a2-a6c8865fced0", 00:22:49.769 "strip_size_kb": 0, 00:22:49.769 "state": "online", 00:22:49.769 "raid_level": "raid1", 00:22:49.770 "superblock": false, 00:22:49.770 "num_base_bdevs": 4, 00:22:49.770 "num_base_bdevs_discovered": 3, 00:22:49.770 "num_base_bdevs_operational": 3, 00:22:49.770 "base_bdevs_list": [ 00:22:49.770 { 00:22:49.770 "name": null, 00:22:49.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:49.770 "is_configured": false, 00:22:49.770 "data_offset": 0, 00:22:49.770 "data_size": 65536 00:22:49.770 }, 00:22:49.770 { 00:22:49.770 "name": "BaseBdev2", 00:22:49.770 "uuid": "6763cd70-474c-4897-9d08-1c6806d7907a", 00:22:49.770 "is_configured": true, 00:22:49.770 "data_offset": 0, 00:22:49.770 "data_size": 65536 00:22:49.770 }, 00:22:49.770 { 00:22:49.770 "name": "BaseBdev3", 00:22:49.770 "uuid": "7279375f-5fa8-48fc-88ef-cda970198058", 00:22:49.770 "is_configured": true, 00:22:49.770 "data_offset": 0, 00:22:49.770 "data_size": 65536 00:22:49.770 }, 00:22:49.770 { 00:22:49.770 "name": "BaseBdev4", 00:22:49.770 "uuid": "83c2973c-9e40-43ea-b807-d5e8b5b077c8", 00:22:49.770 "is_configured": true, 00:22:49.770 "data_offset": 0, 00:22:49.770 "data_size": 65536 00:22:49.770 } 00:22:49.770 ] 00:22:49.770 }' 00:22:49.770 10:36:43 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:50.027 10:36:43 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:50.027 10:36:43 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:50.027 10:36:43 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:50.027 10:36:43 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:50.285 [2024-07-12 10:36:44.015761] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:50.285 [2024-07-12 10:36:44.015824] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:50.285 10:36:44 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:22:50.285 [2024-07-12 10:36:44.063124] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 
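Re-adding the spare (bdev_raid_add_base_bdev above) kicks off the rebuild traced below, which the harness then polls: it re-reads the raid bdev once a second, first confirming the .process object reports type "rebuild" with target "spare", then waiting for it to disappear. A minimal sketch of that wait loop as it executes in the trace (bdev_raid.sh@657-662); the timeout derivation here is illustrative:

timeout=$((SECONDS + 60))   # the real script computes its own bound
while (( SECONDS < timeout )); do
    proc=$($rpc_py bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "raid_bdev1") | .process.type // "none"')
    [[ $proc == "none" ]] && break   # process object gone: rebuild finished
    sleep 1
done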
00:22:50.285 [2024-07-12 10:36:44.064759] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:50.285 [2024-07-12 10:36:44.187688] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:50.285 [2024-07-12 10:36:44.188823] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:50.542 [2024-07-12 10:36:44.407701] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:50.542 [2024-07-12 10:36:44.407952] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:51.109 [2024-07-12 10:36:44.743881] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:51.109 [2024-07-12 10:36:44.744328] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:51.109 [2024-07-12 10:36:44.859010] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:51.109 [2024-07-12 10:36:44.859543] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:51.367 10:36:45 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:51.367 10:36:45 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:51.367 10:36:45 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:51.367 10:36:45 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:51.367 10:36:45 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:51.367 10:36:45 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:51.367 10:36:45 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:51.626 10:36:45 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:51.626 "name": "raid_bdev1", 00:22:51.626 "uuid": "ac392fe5-0e79-4933-a7a2-a6c8865fced0", 00:22:51.626 "strip_size_kb": 0, 00:22:51.626 "state": "online", 00:22:51.626 "raid_level": "raid1", 00:22:51.626 "superblock": false, 00:22:51.626 "num_base_bdevs": 4, 00:22:51.626 "num_base_bdevs_discovered": 4, 00:22:51.626 "num_base_bdevs_operational": 4, 00:22:51.626 "process": { 00:22:51.626 "type": "rebuild", 00:22:51.626 "target": "spare", 00:22:51.626 "progress": { 00:22:51.626 "blocks": 14336, 00:22:51.626 "percent": 21 00:22:51.626 } 00:22:51.626 }, 00:22:51.626 "base_bdevs_list": [ 00:22:51.626 { 00:22:51.626 "name": "spare", 00:22:51.626 "uuid": "6e2a6a80-60b5-5ce5-a114-0e3f9d270ceb", 00:22:51.626 "is_configured": true, 00:22:51.626 "data_offset": 0, 00:22:51.626 "data_size": 65536 00:22:51.626 }, 00:22:51.626 { 00:22:51.626 "name": "BaseBdev2", 00:22:51.626 "uuid": "6763cd70-474c-4897-9d08-1c6806d7907a", 00:22:51.626 "is_configured": true, 00:22:51.626 "data_offset": 0, 00:22:51.626 "data_size": 65536 00:22:51.626 }, 00:22:51.626 { 00:22:51.626 "name": "BaseBdev3", 00:22:51.626 "uuid": "7279375f-5fa8-48fc-88ef-cda970198058", 00:22:51.626 "is_configured": true, 00:22:51.626 "data_offset": 0, 00:22:51.626 "data_size": 65536 00:22:51.626 }, 00:22:51.626 { 00:22:51.626 "name": "BaseBdev4", 00:22:51.626 "uuid": "83c2973c-9e40-43ea-b807-d5e8b5b077c8", 00:22:51.626 "is_configured": 
true, 00:22:51.626 "data_offset": 0, 00:22:51.626 "data_size": 65536 00:22:51.626 } 00:22:51.626 ] 00:22:51.626 }' 00:22:51.626 10:36:45 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:51.626 10:36:45 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:51.626 10:36:45 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:51.626 10:36:45 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:51.626 10:36:45 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:22:51.626 10:36:45 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:22:51.626 10:36:45 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:22:51.626 10:36:45 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:22:51.627 10:36:45 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:22:51.886 [2024-07-12 10:36:45.592978] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:51.886 [2024-07-12 10:36:45.676903] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:22:51.886 [2024-07-12 10:36:45.797715] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005930 00:22:51.886 [2024-07-12 10:36:45.797764] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005ba0 00:22:51.886 [2024-07-12 10:36:45.798457] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:22:51.886 [2024-07-12 10:36:45.800013] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:22:52.166 10:36:45 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:22:52.166 10:36:45 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:22:52.166 10:36:45 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:52.166 10:36:45 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:52.166 10:36:45 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:52.166 10:36:45 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:52.166 10:36:45 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:52.166 10:36:45 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:52.166 10:36:45 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:52.166 10:36:46 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:52.166 "name": "raid_bdev1", 00:22:52.166 "uuid": "ac392fe5-0e79-4933-a7a2-a6c8865fced0", 00:22:52.166 "strip_size_kb": 0, 00:22:52.166 "state": "online", 00:22:52.166 "raid_level": "raid1", 00:22:52.166 "superblock": false, 00:22:52.166 "num_base_bdevs": 4, 00:22:52.166 "num_base_bdevs_discovered": 3, 00:22:52.166 "num_base_bdevs_operational": 3, 00:22:52.166 "process": { 00:22:52.166 "type": "rebuild", 00:22:52.166 "target": "spare", 00:22:52.166 "progress": { 00:22:52.166 "blocks": 24576, 00:22:52.166 "percent": 37 00:22:52.166 } 00:22:52.166 }, 00:22:52.166 "base_bdevs_list": [ 00:22:52.166 { 00:22:52.166 "name": "spare", 00:22:52.166 "uuid": "6e2a6a80-60b5-5ce5-a114-0e3f9d270ceb", 00:22:52.166 "is_configured": true, 00:22:52.166 "data_offset": 0, 00:22:52.166 "data_size": 65536 00:22:52.166 }, 00:22:52.166 { 00:22:52.166 "name": null, 00:22:52.166 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:22:52.166 "is_configured": false, 00:22:52.166 "data_offset": 0, 00:22:52.166 "data_size": 65536 00:22:52.166 }, 00:22:52.166 { 00:22:52.166 "name": "BaseBdev3", 00:22:52.166 "uuid": "7279375f-5fa8-48fc-88ef-cda970198058", 00:22:52.166 "is_configured": true, 00:22:52.166 "data_offset": 0, 00:22:52.166 "data_size": 65536 00:22:52.166 }, 00:22:52.166 { 00:22:52.166 "name": "BaseBdev4", 00:22:52.166 "uuid": "83c2973c-9e40-43ea-b807-d5e8b5b077c8", 00:22:52.166 "is_configured": true, 00:22:52.166 "data_offset": 0, 00:22:52.166 "data_size": 65536 00:22:52.166 } 00:22:52.166 ] 00:22:52.166 }' 00:22:52.166 10:36:46 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:52.505 10:36:46 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:52.505 10:36:46 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:52.505 10:36:46 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:52.505 10:36:46 -- bdev/bdev_raid.sh@657 -- # local timeout=519 00:22:52.505 10:36:46 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:52.505 10:36:46 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:52.505 10:36:46 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:52.505 10:36:46 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:52.505 10:36:46 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:52.505 10:36:46 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:52.505 10:36:46 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:52.505 10:36:46 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:52.505 10:36:46 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:52.505 "name": "raid_bdev1", 00:22:52.505 "uuid": "ac392fe5-0e79-4933-a7a2-a6c8865fced0", 00:22:52.505 "strip_size_kb": 0, 00:22:52.505 "state": "online", 00:22:52.505 "raid_level": "raid1", 00:22:52.505 "superblock": false, 00:22:52.505 "num_base_bdevs": 4, 00:22:52.505 "num_base_bdevs_discovered": 3, 00:22:52.505 "num_base_bdevs_operational": 3, 00:22:52.505 "process": { 00:22:52.505 "type": "rebuild", 00:22:52.505 "target": "spare", 00:22:52.505 "progress": { 00:22:52.505 "blocks": 30720, 00:22:52.505 "percent": 46 00:22:52.505 } 00:22:52.505 }, 00:22:52.505 "base_bdevs_list": [ 00:22:52.505 { 00:22:52.505 "name": "spare", 00:22:52.505 "uuid": "6e2a6a80-60b5-5ce5-a114-0e3f9d270ceb", 00:22:52.505 "is_configured": true, 00:22:52.505 "data_offset": 0, 00:22:52.505 "data_size": 65536 00:22:52.505 }, 00:22:52.505 { 00:22:52.505 "name": null, 00:22:52.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:52.505 "is_configured": false, 00:22:52.505 "data_offset": 0, 00:22:52.505 "data_size": 65536 00:22:52.505 }, 00:22:52.505 { 00:22:52.505 "name": "BaseBdev3", 00:22:52.505 "uuid": "7279375f-5fa8-48fc-88ef-cda970198058", 00:22:52.505 "is_configured": true, 00:22:52.505 "data_offset": 0, 00:22:52.505 "data_size": 65536 00:22:52.505 }, 00:22:52.505 { 00:22:52.505 "name": "BaseBdev4", 00:22:52.505 "uuid": "83c2973c-9e40-43ea-b807-d5e8b5b077c8", 00:22:52.505 "is_configured": true, 00:22:52.505 "data_offset": 0, 00:22:52.505 "data_size": 65536 00:22:52.505 } 00:22:52.505 ] 00:22:52.505 }' 00:22:52.505 10:36:46 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:52.775 10:36:46 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:52.775 10:36:46 -- 
bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:52.775 [2024-07-12 10:36:46.424353] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:22:52.775 [2024-07-12 10:36:46.425199] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:22:52.775 10:36:46 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:52.775 10:36:46 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:52.775 [2024-07-12 10:36:46.641219] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:22:53.340 [2024-07-12 10:36:46.999132] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:22:53.341 [2024-07-12 10:36:47.106652] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:22:53.599 [2024-07-12 10:36:47.441766] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:22:53.599 10:36:47 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:53.599 10:36:47 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:53.599 10:36:47 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:53.599 10:36:47 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:53.599 10:36:47 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:53.599 10:36:47 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:53.599 10:36:47 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:53.599 10:36:47 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:53.857 10:36:47 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:53.857 "name": "raid_bdev1", 00:22:53.857 "uuid": "ac392fe5-0e79-4933-a7a2-a6c8865fced0", 00:22:53.857 "strip_size_kb": 0, 00:22:53.857 "state": "online", 00:22:53.857 "raid_level": "raid1", 00:22:53.857 "superblock": false, 00:22:53.857 "num_base_bdevs": 4, 00:22:53.857 "num_base_bdevs_discovered": 3, 00:22:53.857 "num_base_bdevs_operational": 3, 00:22:53.857 "process": { 00:22:53.857 "type": "rebuild", 00:22:53.857 "target": "spare", 00:22:53.857 "progress": { 00:22:53.857 "blocks": 47104, 00:22:53.857 "percent": 71 00:22:53.857 } 00:22:53.857 }, 00:22:53.857 "base_bdevs_list": [ 00:22:53.857 { 00:22:53.857 "name": "spare", 00:22:53.857 "uuid": "6e2a6a80-60b5-5ce5-a114-0e3f9d270ceb", 00:22:53.857 "is_configured": true, 00:22:53.857 "data_offset": 0, 00:22:53.857 "data_size": 65536 00:22:53.857 }, 00:22:53.857 { 00:22:53.857 "name": null, 00:22:53.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:53.857 "is_configured": false, 00:22:53.857 "data_offset": 0, 00:22:53.857 "data_size": 65536 00:22:53.857 }, 00:22:53.857 { 00:22:53.857 "name": "BaseBdev3", 00:22:53.857 "uuid": "7279375f-5fa8-48fc-88ef-cda970198058", 00:22:53.857 "is_configured": true, 00:22:53.857 "data_offset": 0, 00:22:53.857 "data_size": 65536 00:22:53.857 }, 00:22:53.857 { 00:22:53.857 "name": "BaseBdev4", 00:22:53.857 "uuid": "83c2973c-9e40-43ea-b807-d5e8b5b077c8", 00:22:53.857 "is_configured": true, 00:22:53.857 "data_offset": 0, 00:22:53.857 "data_size": 65536 00:22:53.857 } 00:22:53.857 ] 00:22:53.857 }' 00:22:53.857 10:36:47 -- bdev/bdev_raid.sh@190 -- # jq -r 
'.process.type // "none"' 00:22:53.857 10:36:47 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:53.857 10:36:47 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:54.116 10:36:47 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:54.116 10:36:47 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:54.116 [2024-07-12 10:36:47.894068] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:22:54.374 [2024-07-12 10:36:48.108996] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:22:54.633 [2024-07-12 10:36:48.527093] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:22:54.892 10:36:48 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:54.892 10:36:48 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:54.892 10:36:48 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:54.892 10:36:48 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:54.892 10:36:48 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:54.892 10:36:48 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:54.892 10:36:48 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:55.151 10:36:48 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:55.151 [2024-07-12 10:36:48.976564] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:55.151 10:36:49 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:55.151 "name": "raid_bdev1", 00:22:55.151 "uuid": "ac392fe5-0e79-4933-a7a2-a6c8865fced0", 00:22:55.151 "strip_size_kb": 0, 00:22:55.151 "state": "online", 00:22:55.151 "raid_level": "raid1", 00:22:55.151 "superblock": false, 00:22:55.151 "num_base_bdevs": 4, 00:22:55.151 "num_base_bdevs_discovered": 3, 00:22:55.151 "num_base_bdevs_operational": 3, 00:22:55.151 "process": { 00:22:55.151 "type": "rebuild", 00:22:55.151 "target": "spare", 00:22:55.151 "progress": { 00:22:55.151 "blocks": 65536, 00:22:55.151 "percent": 100 00:22:55.151 } 00:22:55.151 }, 00:22:55.151 "base_bdevs_list": [ 00:22:55.151 { 00:22:55.151 "name": "spare", 00:22:55.151 "uuid": "6e2a6a80-60b5-5ce5-a114-0e3f9d270ceb", 00:22:55.151 "is_configured": true, 00:22:55.151 "data_offset": 0, 00:22:55.151 "data_size": 65536 00:22:55.151 }, 00:22:55.151 { 00:22:55.151 "name": null, 00:22:55.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:55.151 "is_configured": false, 00:22:55.151 "data_offset": 0, 00:22:55.151 "data_size": 65536 00:22:55.151 }, 00:22:55.151 { 00:22:55.151 "name": "BaseBdev3", 00:22:55.151 "uuid": "7279375f-5fa8-48fc-88ef-cda970198058", 00:22:55.151 "is_configured": true, 00:22:55.151 "data_offset": 0, 00:22:55.151 "data_size": 65536 00:22:55.151 }, 00:22:55.151 { 00:22:55.151 "name": "BaseBdev4", 00:22:55.151 "uuid": "83c2973c-9e40-43ea-b807-d5e8b5b077c8", 00:22:55.151 "is_configured": true, 00:22:55.151 "data_offset": 0, 00:22:55.151 "data_size": 65536 00:22:55.151 } 00:22:55.151 ] 00:22:55.151 }' 00:22:55.151 10:36:49 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:55.410 [2024-07-12 10:36:49.082506] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:55.410 10:36:49 -- bdev/bdev_raid.sh@190 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:22:55.410 10:36:49 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:55.410 [2024-07-12 10:36:49.085858] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:55.410 10:36:49 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:55.410 10:36:49 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:56.345 10:36:50 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:56.345 10:36:50 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:56.345 10:36:50 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:56.345 10:36:50 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:56.345 10:36:50 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:56.345 10:36:50 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:56.345 10:36:50 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:56.345 10:36:50 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:56.604 10:36:50 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:56.604 "name": "raid_bdev1", 00:22:56.604 "uuid": "ac392fe5-0e79-4933-a7a2-a6c8865fced0", 00:22:56.604 "strip_size_kb": 0, 00:22:56.604 "state": "online", 00:22:56.604 "raid_level": "raid1", 00:22:56.604 "superblock": false, 00:22:56.604 "num_base_bdevs": 4, 00:22:56.604 "num_base_bdevs_discovered": 3, 00:22:56.604 "num_base_bdevs_operational": 3, 00:22:56.604 "base_bdevs_list": [ 00:22:56.604 { 00:22:56.604 "name": "spare", 00:22:56.604 "uuid": "6e2a6a80-60b5-5ce5-a114-0e3f9d270ceb", 00:22:56.604 "is_configured": true, 00:22:56.604 "data_offset": 0, 00:22:56.604 "data_size": 65536 00:22:56.604 }, 00:22:56.604 { 00:22:56.604 "name": null, 00:22:56.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:56.604 "is_configured": false, 00:22:56.604 "data_offset": 0, 00:22:56.604 "data_size": 65536 00:22:56.604 }, 00:22:56.604 { 00:22:56.604 "name": "BaseBdev3", 00:22:56.604 "uuid": "7279375f-5fa8-48fc-88ef-cda970198058", 00:22:56.604 "is_configured": true, 00:22:56.604 "data_offset": 0, 00:22:56.604 "data_size": 65536 00:22:56.604 }, 00:22:56.604 { 00:22:56.604 "name": "BaseBdev4", 00:22:56.604 "uuid": "83c2973c-9e40-43ea-b807-d5e8b5b077c8", 00:22:56.604 "is_configured": true, 00:22:56.604 "data_offset": 0, 00:22:56.604 "data_size": 65536 00:22:56.604 } 00:22:56.604 ] 00:22:56.604 }' 00:22:56.604 10:36:50 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:56.604 10:36:50 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:56.604 10:36:50 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:56.863 10:36:50 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:22:56.863 10:36:50 -- bdev/bdev_raid.sh@660 -- # break 00:22:56.864 10:36:50 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:56.864 10:36:50 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:56.864 10:36:50 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:56.864 10:36:50 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:56.864 10:36:50 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:56.864 10:36:50 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:56.864 10:36:50 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:56.864 10:36:50 -- bdev/bdev_raid.sh@188 -- # 
raid_bdev_info='{ 00:22:56.864 "name": "raid_bdev1", 00:22:56.864 "uuid": "ac392fe5-0e79-4933-a7a2-a6c8865fced0", 00:22:56.864 "strip_size_kb": 0, 00:22:56.864 "state": "online", 00:22:56.864 "raid_level": "raid1", 00:22:56.864 "superblock": false, 00:22:56.864 "num_base_bdevs": 4, 00:22:56.864 "num_base_bdevs_discovered": 3, 00:22:56.864 "num_base_bdevs_operational": 3, 00:22:56.864 "base_bdevs_list": [ 00:22:56.864 { 00:22:56.864 "name": "spare", 00:22:56.864 "uuid": "6e2a6a80-60b5-5ce5-a114-0e3f9d270ceb", 00:22:56.864 "is_configured": true, 00:22:56.864 "data_offset": 0, 00:22:56.864 "data_size": 65536 00:22:56.864 }, 00:22:56.864 { 00:22:56.864 "name": null, 00:22:56.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:56.864 "is_configured": false, 00:22:56.864 "data_offset": 0, 00:22:56.864 "data_size": 65536 00:22:56.864 }, 00:22:56.864 { 00:22:56.864 "name": "BaseBdev3", 00:22:56.864 "uuid": "7279375f-5fa8-48fc-88ef-cda970198058", 00:22:56.864 "is_configured": true, 00:22:56.864 "data_offset": 0, 00:22:56.864 "data_size": 65536 00:22:56.864 }, 00:22:56.864 { 00:22:56.864 "name": "BaseBdev4", 00:22:56.864 "uuid": "83c2973c-9e40-43ea-b807-d5e8b5b077c8", 00:22:56.864 "is_configured": true, 00:22:56.864 "data_offset": 0, 00:22:56.864 "data_size": 65536 00:22:56.864 } 00:22:56.864 ] 00:22:56.864 }' 00:22:56.864 10:36:50 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:57.123 10:36:50 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:57.123 10:36:50 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:57.123 10:36:50 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:57.123 10:36:50 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:57.123 10:36:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:57.123 10:36:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:57.123 10:36:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:57.123 10:36:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:57.123 10:36:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:57.123 10:36:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:57.123 10:36:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:57.123 10:36:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:57.123 10:36:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:57.123 10:36:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:57.123 10:36:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:57.382 10:36:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:57.382 "name": "raid_bdev1", 00:22:57.382 "uuid": "ac392fe5-0e79-4933-a7a2-a6c8865fced0", 00:22:57.382 "strip_size_kb": 0, 00:22:57.382 "state": "online", 00:22:57.382 "raid_level": "raid1", 00:22:57.382 "superblock": false, 00:22:57.382 "num_base_bdevs": 4, 00:22:57.382 "num_base_bdevs_discovered": 3, 00:22:57.382 "num_base_bdevs_operational": 3, 00:22:57.382 "base_bdevs_list": [ 00:22:57.382 { 00:22:57.382 "name": "spare", 00:22:57.382 "uuid": "6e2a6a80-60b5-5ce5-a114-0e3f9d270ceb", 00:22:57.382 "is_configured": true, 00:22:57.382 "data_offset": 0, 00:22:57.382 "data_size": 65536 00:22:57.382 }, 00:22:57.382 { 00:22:57.382 "name": null, 00:22:57.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:57.382 "is_configured": false, 00:22:57.382 "data_offset": 0, 00:22:57.382 
"data_size": 65536 00:22:57.382 }, 00:22:57.382 { 00:22:57.382 "name": "BaseBdev3", 00:22:57.382 "uuid": "7279375f-5fa8-48fc-88ef-cda970198058", 00:22:57.382 "is_configured": true, 00:22:57.382 "data_offset": 0, 00:22:57.382 "data_size": 65536 00:22:57.382 }, 00:22:57.382 { 00:22:57.382 "name": "BaseBdev4", 00:22:57.382 "uuid": "83c2973c-9e40-43ea-b807-d5e8b5b077c8", 00:22:57.382 "is_configured": true, 00:22:57.382 "data_offset": 0, 00:22:57.382 "data_size": 65536 00:22:57.382 } 00:22:57.382 ] 00:22:57.382 }' 00:22:57.382 10:36:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:57.382 10:36:51 -- common/autotest_common.sh@10 -- # set +x 00:22:57.949 10:36:51 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:58.208 [2024-07-12 10:36:51.991488] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:58.208 [2024-07-12 10:36:51.991534] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:58.208 00:22:58.208 Latency(us) 00:22:58.208 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:58.208 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:22:58.208 raid_bdev1 : 12.68 105.40 316.20 0.00 0.00 13701.39 303.48 117726.49 00:22:58.208 =================================================================================================================== 00:22:58.208 Total : 105.40 316.20 0.00 0.00 13701.39 303.48 117726.49 00:22:58.208 [2024-07-12 10:36:52.046162] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:58.208 [2024-07-12 10:36:52.046216] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:58.208 [2024-07-12 10:36:52.046297] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:58.208 [2024-07-12 10:36:52.046309] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:22:58.208 0 00:22:58.208 10:36:52 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:58.208 10:36:52 -- bdev/bdev_raid.sh@671 -- # jq length 00:22:58.467 10:36:52 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:22:58.467 10:36:52 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:22:58.467 10:36:52 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:22:58.467 10:36:52 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:58.467 10:36:52 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:22:58.467 10:36:52 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:58.467 10:36:52 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:22:58.467 10:36:52 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:58.467 10:36:52 -- bdev/nbd_common.sh@12 -- # local i 00:22:58.467 10:36:52 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:58.467 10:36:52 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:58.467 10:36:52 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:22:58.726 /dev/nbd0 00:22:58.726 10:36:52 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:58.726 10:36:52 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:58.726 10:36:52 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:22:58.726 10:36:52 -- 
common/autotest_common.sh@857 -- # local i 00:22:58.726 10:36:52 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:58.726 10:36:52 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:58.726 10:36:52 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:22:58.726 10:36:52 -- common/autotest_common.sh@861 -- # break 00:22:58.726 10:36:52 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:58.726 10:36:52 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:58.726 10:36:52 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:58.726 1+0 records in 00:22:58.726 1+0 records out 00:22:58.726 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000487808 s, 8.4 MB/s 00:22:58.726 10:36:52 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:58.726 10:36:52 -- common/autotest_common.sh@874 -- # size=4096 00:22:58.726 10:36:52 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:58.726 10:36:52 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:58.726 10:36:52 -- common/autotest_common.sh@877 -- # return 0 00:22:58.726 10:36:52 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:58.726 10:36:52 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:58.726 10:36:52 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:22:58.726 10:36:52 -- bdev/bdev_raid.sh@677 -- # '[' -z '' ']' 00:22:58.726 10:36:52 -- bdev/bdev_raid.sh@678 -- # continue 00:22:58.726 10:36:52 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:22:58.726 10:36:52 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev3 ']' 00:22:58.726 10:36:52 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:22:58.726 10:36:52 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:58.726 10:36:52 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:22:58.726 10:36:52 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:58.726 10:36:52 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:22:58.726 10:36:52 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:58.726 10:36:52 -- bdev/nbd_common.sh@12 -- # local i 00:22:58.726 10:36:52 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:58.726 10:36:52 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:58.726 10:36:52 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:22:58.984 /dev/nbd1 00:22:58.984 10:36:52 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:58.984 10:36:52 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:58.984 10:36:52 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:22:58.984 10:36:52 -- common/autotest_common.sh@857 -- # local i 00:22:58.984 10:36:52 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:58.984 10:36:52 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:58.984 10:36:52 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:22:58.984 10:36:52 -- common/autotest_common.sh@861 -- # break 00:22:58.984 10:36:52 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:58.984 10:36:52 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:58.984 10:36:52 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:58.984 1+0 records in 00:22:58.984 1+0 records out 00:22:58.984 4096 bytes (4.1 kB, 4.0 
KiB) copied, 0.000332966 s, 12.3 MB/s 00:22:58.984 10:36:52 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:58.984 10:36:52 -- common/autotest_common.sh@874 -- # size=4096 00:22:58.984 10:36:52 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:58.984 10:36:52 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:58.984 10:36:52 -- common/autotest_common.sh@877 -- # return 0 00:22:58.984 10:36:52 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:58.984 10:36:52 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:58.984 10:36:52 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:22:59.243 10:36:53 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:22:59.243 10:36:53 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:59.243 10:36:53 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:22:59.243 10:36:53 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:59.243 10:36:53 -- bdev/nbd_common.sh@51 -- # local i 00:22:59.243 10:36:53 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:59.243 10:36:53 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:59.500 10:36:53 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:59.500 10:36:53 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:59.500 10:36:53 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:59.500 10:36:53 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:59.500 10:36:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:59.500 10:36:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:59.500 10:36:53 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:22:59.500 10:36:53 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:22:59.500 10:36:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:59.500 10:36:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:59.500 10:36:53 -- bdev/nbd_common.sh@41 -- # break 00:22:59.500 10:36:53 -- bdev/nbd_common.sh@45 -- # return 0 00:22:59.500 10:36:53 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:22:59.500 10:36:53 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev4 ']' 00:22:59.500 10:36:53 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:22:59.500 10:36:53 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:59.500 10:36:53 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:22:59.500 10:36:53 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:59.500 10:36:53 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:22:59.500 10:36:53 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:59.500 10:36:53 -- bdev/nbd_common.sh@12 -- # local i 00:22:59.500 10:36:53 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:59.500 10:36:53 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:59.500 10:36:53 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:22:59.757 /dev/nbd1 00:22:59.757 10:36:53 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:59.757 10:36:53 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:59.757 10:36:53 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:22:59.757 10:36:53 -- common/autotest_common.sh@857 -- # local i 00:22:59.757 10:36:53 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:59.757 10:36:53 -- common/autotest_common.sh@859 -- # (( i 
<= 20 )) 00:22:59.757 10:36:53 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:22:59.757 10:36:53 -- common/autotest_common.sh@861 -- # break 00:22:59.757 10:36:53 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:59.757 10:36:53 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:59.757 10:36:53 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:59.757 1+0 records in 00:22:59.757 1+0 records out 00:22:59.757 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000390142 s, 10.5 MB/s 00:22:59.757 10:36:53 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:59.757 10:36:53 -- common/autotest_common.sh@874 -- # size=4096 00:22:59.757 10:36:53 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:59.757 10:36:53 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:59.757 10:36:53 -- common/autotest_common.sh@877 -- # return 0 00:22:59.757 10:36:53 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:59.757 10:36:53 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:59.757 10:36:53 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:23:00.015 10:36:53 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:23:00.015 10:36:53 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:00.015 10:36:53 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:23:00.015 10:36:53 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:00.015 10:36:53 -- bdev/nbd_common.sh@51 -- # local i 00:23:00.015 10:36:53 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:00.015 10:36:53 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:23:00.015 10:36:53 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:00.273 10:36:53 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:00.273 10:36:53 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:00.273 10:36:53 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:00.273 10:36:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:00.273 10:36:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:00.273 10:36:53 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:23:00.273 10:36:54 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:23:00.273 10:36:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:00.273 10:36:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:00.273 10:36:54 -- bdev/nbd_common.sh@41 -- # break 00:23:00.273 10:36:54 -- bdev/nbd_common.sh@45 -- # return 0 00:23:00.273 10:36:54 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:23:00.273 10:36:54 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:00.273 10:36:54 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:23:00.273 10:36:54 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:00.273 10:36:54 -- bdev/nbd_common.sh@51 -- # local i 00:23:00.273 10:36:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:00.273 10:36:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:00.532 10:36:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:00.532 10:36:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:00.532 10:36:54 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:00.532 10:36:54 -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:00.532 10:36:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:00.532 10:36:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:00.532 10:36:54 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:23:00.532 10:36:54 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:23:00.532 10:36:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:00.532 10:36:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:00.532 10:36:54 -- bdev/nbd_common.sh@41 -- # break 00:23:00.532 10:36:54 -- bdev/nbd_common.sh@45 -- # return 0 00:23:00.532 10:36:54 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:23:00.532 10:36:54 -- bdev/bdev_raid.sh@709 -- # killprocess 129395 00:23:00.532 10:36:54 -- common/autotest_common.sh@926 -- # '[' -z 129395 ']' 00:23:00.532 10:36:54 -- common/autotest_common.sh@930 -- # kill -0 129395 00:23:00.532 10:36:54 -- common/autotest_common.sh@931 -- # uname 00:23:00.532 10:36:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:00.532 10:36:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 129395 00:23:00.532 10:36:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:00.532 10:36:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:00.532 10:36:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 129395' 00:23:00.532 killing process with pid 129395 00:23:00.532 10:36:54 -- common/autotest_common.sh@945 -- # kill 129395 00:23:00.532 Received shutdown signal, test time was about 15.033140 seconds 00:23:00.532 00:23:00.532 Latency(us) 00:23:00.532 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:00.532 =================================================================================================================== 00:23:00.532 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:00.532 10:36:54 -- common/autotest_common.sh@950 -- # wait 129395 00:23:00.532 [2024-07-12 10:36:54.379659] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:00.790 [2024-07-12 10:36:54.670624] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:02.166 ************************************ 00:23:02.166 END TEST raid_rebuild_test_io 00:23:02.166 ************************************ 00:23:02.166 10:36:55 -- bdev/bdev_raid.sh@711 -- # return 0 00:23:02.166 00:23:02.166 real 0m20.611s 00:23:02.166 user 0m31.541s 00:23:02.166 sys 0m2.195s 00:23:02.166 10:36:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:02.166 10:36:55 -- common/autotest_common.sh@10 -- # set +x 00:23:02.166 10:36:55 -- bdev/bdev_raid.sh@738 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true 00:23:02.166 10:36:55 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:23:02.166 10:36:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:02.166 10:36:55 -- common/autotest_common.sh@10 -- # set +x 00:23:02.166 ************************************ 00:23:02.166 START TEST raid_rebuild_test_sb_io 00:23:02.166 ************************************ 00:23:02.166 10:36:55 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 4 true true 00:23:02.166 10:36:55 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:23:02.166 10:36:55 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:23:02.166 10:36:55 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:23:02.166 10:36:55 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:23:02.166 10:36:55 -- bdev/bdev_raid.sh@521 -- # 
base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:23:02.166 10:36:55 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:23:02.166 10:36:55 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:02.166 10:36:55 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:23:02.166 10:36:55 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:02.166 10:36:55 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:02.166 10:36:55 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:23:02.166 10:36:55 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:02.166 10:36:55 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:02.166 10:36:55 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:23:02.166 10:36:55 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:02.166 10:36:55 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:02.166 10:36:55 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:23:02.166 10:36:55 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:02.166 10:36:55 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:02.166 10:36:55 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:23:02.166 10:36:55 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:23:02.166 10:36:55 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:23:02.166 10:36:55 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:23:02.166 10:36:55 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:23:02.166 10:36:55 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:23:02.166 10:36:55 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:23:02.166 10:36:55 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:23:02.166 10:36:55 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:23:02.166 10:36:55 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:23:02.166 10:36:55 -- bdev/bdev_raid.sh@544 -- # raid_pid=129974 00:23:02.166 10:36:55 -- bdev/bdev_raid.sh@545 -- # waitforlisten 129974 /var/tmp/spdk-raid.sock 00:23:02.166 10:36:55 -- common/autotest_common.sh@819 -- # '[' -z 129974 ']' 00:23:02.166 10:36:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:02.166 10:36:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:02.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:02.166 10:36:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:02.166 10:36:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:02.166 10:36:55 -- common/autotest_common.sh@10 -- # set +x 00:23:02.166 10:36:55 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:02.166 [2024-07-12 10:36:55.866148] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:23:02.166 [2024-07-12 10:36:55.867051] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129974 ] 00:23:02.166 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:02.166 Zero copy mechanism will not be used. 
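
The xtrace above (bdev_raid.sh@521 through @545) shows how this sb_io variant assembles its base-bdev list and launches bdevperf as the background I/O generator. A minimal sketch of that sequence, reconstructed from the trace — the paths, socket, and flags are exactly the ones traced; the flag annotations are inferred from the surrounding log output, not taken from bdevperf documentation:

    # Build the BaseBdev1..BaseBdev4 name list (bdev_raid.sh@521, as traced).
    num_base_bdevs=4
    base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done))

    # Background I/O load (bdev_raid.sh@543): 60 s of 50/50 random read/write
    # (-w randrw -M 50, matching "percentage: 50" in the results table), 3 MiB
    # I/Os (-o 3M, matching the "I/O size of 3145728" notice), queue depth 2
    # (-q 2); -z appears to hold bdevperf until the perform_tests RPC issued
    # later in the log via bdevperf.py.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock \
        -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!   # the trace records raid_pid=129974
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock   # harness helper, as traced
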
00:23:02.166 [2024-07-12 10:36:56.032649] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:02.425 [2024-07-12 10:36:56.210131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:02.683 [2024-07-12 10:36:56.396199] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:02.942 10:36:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:02.942 10:36:56 -- common/autotest_common.sh@852 -- # return 0 00:23:02.942 10:36:56 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:02.942 10:36:56 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:23:02.942 10:36:56 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:03.200 BaseBdev1_malloc 00:23:03.200 10:36:56 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:03.457 [2024-07-12 10:36:57.210732] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:03.457 [2024-07-12 10:36:57.210831] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:03.457 [2024-07-12 10:36:57.210866] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:23:03.457 [2024-07-12 10:36:57.210912] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:03.457 [2024-07-12 10:36:57.213168] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:03.457 [2024-07-12 10:36:57.213212] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:03.457 BaseBdev1 00:23:03.457 10:36:57 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:03.457 10:36:57 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:23:03.457 10:36:57 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:03.715 BaseBdev2_malloc 00:23:03.715 10:36:57 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:03.973 [2024-07-12 10:36:57.671137] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:03.973 [2024-07-12 10:36:57.671204] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:03.973 [2024-07-12 10:36:57.671251] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:23:03.973 [2024-07-12 10:36:57.671305] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:03.973 [2024-07-12 10:36:57.673482] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:03.973 [2024-07-12 10:36:57.673525] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:03.973 BaseBdev2 00:23:03.973 10:36:57 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:03.973 10:36:57 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:23:03.973 10:36:57 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:23:04.231 BaseBdev3_malloc 00:23:04.231 10:36:57 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b 
BaseBdev3_malloc -p BaseBdev3 00:23:04.231 [2024-07-12 10:36:58.072425] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:23:04.231 [2024-07-12 10:36:58.072488] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:04.231 [2024-07-12 10:36:58.072528] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:23:04.231 [2024-07-12 10:36:58.072572] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:04.231 [2024-07-12 10:36:58.074800] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:04.231 [2024-07-12 10:36:58.074848] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:04.231 BaseBdev3 00:23:04.231 10:36:58 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:04.232 10:36:58 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:23:04.232 10:36:58 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:23:04.490 BaseBdev4_malloc 00:23:04.490 10:36:58 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:23:04.750 [2024-07-12 10:36:58.581373] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:23:04.750 [2024-07-12 10:36:58.581447] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:04.750 [2024-07-12 10:36:58.581483] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:23:04.750 [2024-07-12 10:36:58.581528] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:04.750 [2024-07-12 10:36:58.583733] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:04.750 [2024-07-12 10:36:58.583781] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:23:04.750 BaseBdev4 00:23:04.750 10:36:58 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:23:05.008 spare_malloc 00:23:05.008 10:36:58 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:05.265 spare_delay 00:23:05.265 10:36:59 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:05.523 [2024-07-12 10:36:59.206112] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:05.523 [2024-07-12 10:36:59.206179] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:05.523 [2024-07-12 10:36:59.206211] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:23:05.523 [2024-07-12 10:36:59.206254] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:05.523 [2024-07-12 10:36:59.208435] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:05.523 [2024-07-12 10:36:59.208489] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:05.523 spare 00:23:05.523 10:36:59 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:23:05.523 [2024-07-12 10:36:59.398199] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:05.523 [2024-07-12 10:36:59.400117] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:05.523 [2024-07-12 10:36:59.400214] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:05.523 [2024-07-12 10:36:59.400271] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:05.523 [2024-07-12 10:36:59.400494] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:23:05.523 [2024-07-12 10:36:59.400515] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:05.523 [2024-07-12 10:36:59.400629] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:23:05.523 [2024-07-12 10:36:59.400954] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:23:05.523 [2024-07-12 10:36:59.400975] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:23:05.523 [2024-07-12 10:36:59.401104] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:05.523 10:36:59 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:23:05.523 10:36:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:05.523 10:36:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:05.523 10:36:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:05.523 10:36:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:05.523 10:36:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:05.523 10:36:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:05.523 10:36:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:05.523 10:36:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:05.523 10:36:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:05.523 10:36:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:05.523 10:36:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:05.780 10:36:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:05.780 "name": "raid_bdev1", 00:23:05.780 "uuid": "50eb36a5-191d-4307-a9e9-185caf35f525", 00:23:05.780 "strip_size_kb": 0, 00:23:05.780 "state": "online", 00:23:05.780 "raid_level": "raid1", 00:23:05.780 "superblock": true, 00:23:05.780 "num_base_bdevs": 4, 00:23:05.780 "num_base_bdevs_discovered": 4, 00:23:05.780 "num_base_bdevs_operational": 4, 00:23:05.780 "base_bdevs_list": [ 00:23:05.780 { 00:23:05.780 "name": "BaseBdev1", 00:23:05.780 "uuid": "9ea09b6c-f6cd-5f33-a35a-898ea4b43e77", 00:23:05.780 "is_configured": true, 00:23:05.780 "data_offset": 2048, 00:23:05.780 "data_size": 63488 00:23:05.780 }, 00:23:05.780 { 00:23:05.780 "name": "BaseBdev2", 00:23:05.780 "uuid": "435fe70a-f927-526d-a330-f4a01cbd5a53", 00:23:05.780 "is_configured": true, 00:23:05.780 "data_offset": 2048, 00:23:05.780 "data_size": 63488 00:23:05.780 }, 00:23:05.780 { 00:23:05.780 "name": "BaseBdev3", 00:23:05.780 "uuid": "6cfc8200-b457-5f44-b0a4-2a36bafd6c18", 00:23:05.780 "is_configured": true, 00:23:05.780 "data_offset": 2048, 00:23:05.780 "data_size": 63488 00:23:05.780 }, 00:23:05.780 
{ 00:23:05.780 "name": "BaseBdev4", 00:23:05.780 "uuid": "98bb5746-bb54-5764-9376-72288f9ca1f4", 00:23:05.780 "is_configured": true, 00:23:05.780 "data_offset": 2048, 00:23:05.780 "data_size": 63488 00:23:05.780 } 00:23:05.780 ] 00:23:05.780 }' 00:23:05.780 10:36:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:05.780 10:36:59 -- common/autotest_common.sh@10 -- # set +x 00:23:06.713 10:37:00 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:06.713 10:37:00 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:23:06.713 [2024-07-12 10:37:00.506492] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:06.713 10:37:00 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:23:06.713 10:37:00 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:06.713 10:37:00 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:06.971 10:37:00 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:23:06.971 10:37:00 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:23:06.971 10:37:00 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:23:06.971 10:37:00 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:23:06.971 [2024-07-12 10:37:00.781593] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:23:06.971 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:06.971 Zero copy mechanism will not be used. 00:23:06.971 Running I/O for 60 seconds... 
00:23:07.228 [2024-07-12 10:37:00.957704] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:07.228 [2024-07-12 10:37:00.963644] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005c70 00:23:07.229 10:37:00 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:07.229 10:37:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:07.229 10:37:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:07.229 10:37:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:07.229 10:37:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:07.229 10:37:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:07.229 10:37:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:07.229 10:37:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:07.229 10:37:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:07.229 10:37:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:07.229 10:37:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:07.229 10:37:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:07.486 10:37:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:07.486 "name": "raid_bdev1", 00:23:07.486 "uuid": "50eb36a5-191d-4307-a9e9-185caf35f525", 00:23:07.486 "strip_size_kb": 0, 00:23:07.486 "state": "online", 00:23:07.486 "raid_level": "raid1", 00:23:07.486 "superblock": true, 00:23:07.486 "num_base_bdevs": 4, 00:23:07.486 "num_base_bdevs_discovered": 3, 00:23:07.486 "num_base_bdevs_operational": 3, 00:23:07.486 "base_bdevs_list": [ 00:23:07.486 { 00:23:07.486 "name": null, 00:23:07.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:07.486 "is_configured": false, 00:23:07.486 "data_offset": 2048, 00:23:07.486 "data_size": 63488 00:23:07.486 }, 00:23:07.486 { 00:23:07.486 "name": "BaseBdev2", 00:23:07.486 "uuid": "435fe70a-f927-526d-a330-f4a01cbd5a53", 00:23:07.486 "is_configured": true, 00:23:07.486 "data_offset": 2048, 00:23:07.486 "data_size": 63488 00:23:07.486 }, 00:23:07.486 { 00:23:07.486 "name": "BaseBdev3", 00:23:07.486 "uuid": "6cfc8200-b457-5f44-b0a4-2a36bafd6c18", 00:23:07.486 "is_configured": true, 00:23:07.486 "data_offset": 2048, 00:23:07.486 "data_size": 63488 00:23:07.486 }, 00:23:07.486 { 00:23:07.486 "name": "BaseBdev4", 00:23:07.486 "uuid": "98bb5746-bb54-5764-9376-72288f9ca1f4", 00:23:07.486 "is_configured": true, 00:23:07.486 "data_offset": 2048, 00:23:07.486 "data_size": 63488 00:23:07.486 } 00:23:07.486 ] 00:23:07.486 }' 00:23:07.486 10:37:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:07.486 10:37:01 -- common/autotest_common.sh@10 -- # set +x 00:23:08.050 10:37:01 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:08.308 [2024-07-12 10:37:02.006179] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:23:08.308 [2024-07-12 10:37:02.006262] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:08.308 10:37:02 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:23:08.309 [2024-07-12 10:37:02.056708] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:23:08.309 [2024-07-12 10:37:02.058806] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:08.309 
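
The failover being exercised here is a hot-remove of BaseBdev1 while bdevperf I/O is in flight (bdev_raid.sh@591), followed by attaching the passthru spare (bdev_raid.sh@597), which is what produces the "Started rebuild" notice above. Sketched with the two RPCs exactly as traced, reusing the rpc() shorthand from the previous sketch; the 60-second poll window is an assumption, the earlier run's trace only shows 'local timeout=519' compared against SECONDS:

    rpc bdev_raid_remove_base_bdev BaseBdev1       # degrade: 4 -> 3 operational
    rpc bdev_raid_add_base_bdev raid_bdev1 spare   # attach spare; rebuild starts

    # Poll roughly like bdev_raid.sh@657..@662: re-check once a second until
    # the rebuild process disappears from the bdev info, bounded by a timeout.
    timeout=$((SECONDS + 60))   # assumed window; not the traced value
    while (( SECONDS < timeout )); do
        ptype=$(rpc bdev_raid_get_bdevs all |
            jq -r '.[] | select(.name == "raid_bdev1") | .process.type // "none"')
        [[ $ptype == none ]] && break
        sleep 1
    done
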
[2024-07-12 10:37:02.167525] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:08.309 [2024-07-12 10:37:02.168207] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:08.567 [2024-07-12 10:37:02.298676] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:08.826 [2024-07-12 10:37:02.550648] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:23:08.826 [2024-07-12 10:37:02.689113] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:23:09.084 [2024-07-12 10:37:02.903024] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:23:09.341 [2024-07-12 10:37:03.010015] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:23:09.341 10:37:03 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:09.341 10:37:03 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:09.341 10:37:03 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:09.341 10:37:03 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:09.341 10:37:03 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:09.341 10:37:03 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:09.341 10:37:03 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:09.597 10:37:03 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:09.597 "name": "raid_bdev1", 00:23:09.597 "uuid": "50eb36a5-191d-4307-a9e9-185caf35f525", 00:23:09.597 "strip_size_kb": 0, 00:23:09.597 "state": "online", 00:23:09.597 "raid_level": "raid1", 00:23:09.597 "superblock": true, 00:23:09.598 "num_base_bdevs": 4, 00:23:09.598 "num_base_bdevs_discovered": 4, 00:23:09.598 "num_base_bdevs_operational": 4, 00:23:09.598 "process": { 00:23:09.598 "type": "rebuild", 00:23:09.598 "target": "spare", 00:23:09.598 "progress": { 00:23:09.598 "blocks": 18432, 00:23:09.598 "percent": 29 00:23:09.598 } 00:23:09.598 }, 00:23:09.598 "base_bdevs_list": [ 00:23:09.598 { 00:23:09.598 "name": "spare", 00:23:09.598 "uuid": "0ed469c9-a864-5065-b6cb-9121d245ba37", 00:23:09.598 "is_configured": true, 00:23:09.598 "data_offset": 2048, 00:23:09.598 "data_size": 63488 00:23:09.598 }, 00:23:09.598 { 00:23:09.598 "name": "BaseBdev2", 00:23:09.598 "uuid": "435fe70a-f927-526d-a330-f4a01cbd5a53", 00:23:09.598 "is_configured": true, 00:23:09.598 "data_offset": 2048, 00:23:09.598 "data_size": 63488 00:23:09.598 }, 00:23:09.598 { 00:23:09.598 "name": "BaseBdev3", 00:23:09.598 "uuid": "6cfc8200-b457-5f44-b0a4-2a36bafd6c18", 00:23:09.598 "is_configured": true, 00:23:09.598 "data_offset": 2048, 00:23:09.598 "data_size": 63488 00:23:09.598 }, 00:23:09.598 { 00:23:09.598 "name": "BaseBdev4", 00:23:09.598 "uuid": "98bb5746-bb54-5764-9376-72288f9ca1f4", 00:23:09.598 "is_configured": true, 00:23:09.598 "data_offset": 2048, 00:23:09.598 "data_size": 63488 00:23:09.598 } 00:23:09.598 ] 00:23:09.598 }' 00:23:09.598 10:37:03 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:09.598 [2024-07-12 10:37:03.368321] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: 
split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:23:09.598 10:37:03 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:09.598 10:37:03 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:09.598 10:37:03 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:09.598 10:37:03 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:09.855 [2024-07-12 10:37:03.596424] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:23:09.855 [2024-07-12 10:37:03.688560] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:09.855 [2024-07-12 10:37:03.721824] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:09.855 [2024-07-12 10:37:03.730572] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:09.855 [2024-07-12 10:37:03.762710] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005c70 00:23:10.113 10:37:03 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:10.113 10:37:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:10.113 10:37:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:10.113 10:37:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:10.113 10:37:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:10.113 10:37:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:10.113 10:37:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:10.113 10:37:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:10.113 10:37:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:10.113 10:37:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:10.113 10:37:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:10.113 10:37:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:10.371 10:37:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:10.371 "name": "raid_bdev1", 00:23:10.371 "uuid": "50eb36a5-191d-4307-a9e9-185caf35f525", 00:23:10.371 "strip_size_kb": 0, 00:23:10.371 "state": "online", 00:23:10.371 "raid_level": "raid1", 00:23:10.371 "superblock": true, 00:23:10.371 "num_base_bdevs": 4, 00:23:10.371 "num_base_bdevs_discovered": 3, 00:23:10.371 "num_base_bdevs_operational": 3, 00:23:10.371 "base_bdevs_list": [ 00:23:10.371 { 00:23:10.371 "name": null, 00:23:10.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:10.371 "is_configured": false, 00:23:10.371 "data_offset": 2048, 00:23:10.371 "data_size": 63488 00:23:10.371 }, 00:23:10.371 { 00:23:10.371 "name": "BaseBdev2", 00:23:10.371 "uuid": "435fe70a-f927-526d-a330-f4a01cbd5a53", 00:23:10.371 "is_configured": true, 00:23:10.371 "data_offset": 2048, 00:23:10.371 "data_size": 63488 00:23:10.371 }, 00:23:10.371 { 00:23:10.371 "name": "BaseBdev3", 00:23:10.371 "uuid": "6cfc8200-b457-5f44-b0a4-2a36bafd6c18", 00:23:10.371 "is_configured": true, 00:23:10.371 "data_offset": 2048, 00:23:10.371 "data_size": 63488 00:23:10.371 }, 00:23:10.371 { 00:23:10.371 "name": "BaseBdev4", 00:23:10.371 "uuid": "98bb5746-bb54-5764-9376-72288f9ca1f4", 00:23:10.371 "is_configured": true, 00:23:10.371 "data_offset": 2048, 00:23:10.371 "data_size": 63488 00:23:10.371 } 00:23:10.371 
] 00:23:10.371 }' 00:23:10.371 10:37:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:10.371 10:37:04 -- common/autotest_common.sh@10 -- # set +x 00:23:10.938 10:37:04 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:10.938 10:37:04 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:10.938 10:37:04 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:10.938 10:37:04 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:10.938 10:37:04 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:10.938 10:37:04 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:10.938 10:37:04 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:11.196 10:37:04 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:11.196 "name": "raid_bdev1", 00:23:11.196 "uuid": "50eb36a5-191d-4307-a9e9-185caf35f525", 00:23:11.196 "strip_size_kb": 0, 00:23:11.196 "state": "online", 00:23:11.196 "raid_level": "raid1", 00:23:11.196 "superblock": true, 00:23:11.196 "num_base_bdevs": 4, 00:23:11.196 "num_base_bdevs_discovered": 3, 00:23:11.196 "num_base_bdevs_operational": 3, 00:23:11.196 "base_bdevs_list": [ 00:23:11.196 { 00:23:11.196 "name": null, 00:23:11.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:11.196 "is_configured": false, 00:23:11.196 "data_offset": 2048, 00:23:11.196 "data_size": 63488 00:23:11.196 }, 00:23:11.196 { 00:23:11.196 "name": "BaseBdev2", 00:23:11.196 "uuid": "435fe70a-f927-526d-a330-f4a01cbd5a53", 00:23:11.196 "is_configured": true, 00:23:11.196 "data_offset": 2048, 00:23:11.196 "data_size": 63488 00:23:11.196 }, 00:23:11.196 { 00:23:11.196 "name": "BaseBdev3", 00:23:11.196 "uuid": "6cfc8200-b457-5f44-b0a4-2a36bafd6c18", 00:23:11.196 "is_configured": true, 00:23:11.196 "data_offset": 2048, 00:23:11.196 "data_size": 63488 00:23:11.196 }, 00:23:11.196 { 00:23:11.196 "name": "BaseBdev4", 00:23:11.197 "uuid": "98bb5746-bb54-5764-9376-72288f9ca1f4", 00:23:11.197 "is_configured": true, 00:23:11.197 "data_offset": 2048, 00:23:11.197 "data_size": 63488 00:23:11.197 } 00:23:11.197 ] 00:23:11.197 }' 00:23:11.197 10:37:04 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:11.197 10:37:04 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:11.197 10:37:04 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:11.197 10:37:05 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:11.197 10:37:05 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:11.455 [2024-07-12 10:37:05.211703] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:23:11.455 [2024-07-12 10:37:05.211782] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:11.455 10:37:05 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:23:11.455 [2024-07-12 10:37:05.254789] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:23:11.455 [2024-07-12 10:37:05.256888] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:11.714 [2024-07-12 10:37:05.380170] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:11.714 [2024-07-12 10:37:05.381565] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 
6144 00:23:11.714 [2024-07-12 10:37:05.606388] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:11.714 [2024-07-12 10:37:05.606709] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:11.972 [2024-07-12 10:37:05.858926] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:23:12.230 [2024-07-12 10:37:06.070961] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:23:12.489 10:37:06 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:12.489 10:37:06 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:12.489 10:37:06 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:12.489 10:37:06 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:12.489 10:37:06 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:12.489 10:37:06 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:12.489 10:37:06 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:12.489 [2024-07-12 10:37:06.328465] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:23:12.747 10:37:06 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:12.747 "name": "raid_bdev1", 00:23:12.747 "uuid": "50eb36a5-191d-4307-a9e9-185caf35f525", 00:23:12.747 "strip_size_kb": 0, 00:23:12.747 "state": "online", 00:23:12.747 "raid_level": "raid1", 00:23:12.747 "superblock": true, 00:23:12.747 "num_base_bdevs": 4, 00:23:12.747 "num_base_bdevs_discovered": 4, 00:23:12.747 "num_base_bdevs_operational": 4, 00:23:12.747 "process": { 00:23:12.747 "type": "rebuild", 00:23:12.747 "target": "spare", 00:23:12.747 "progress": { 00:23:12.747 "blocks": 14336, 00:23:12.747 "percent": 22 00:23:12.747 } 00:23:12.747 }, 00:23:12.747 "base_bdevs_list": [ 00:23:12.747 { 00:23:12.747 "name": "spare", 00:23:12.747 "uuid": "0ed469c9-a864-5065-b6cb-9121d245ba37", 00:23:12.747 "is_configured": true, 00:23:12.747 "data_offset": 2048, 00:23:12.747 "data_size": 63488 00:23:12.747 }, 00:23:12.747 { 00:23:12.747 "name": "BaseBdev2", 00:23:12.747 "uuid": "435fe70a-f927-526d-a330-f4a01cbd5a53", 00:23:12.747 "is_configured": true, 00:23:12.747 "data_offset": 2048, 00:23:12.747 "data_size": 63488 00:23:12.747 }, 00:23:12.747 { 00:23:12.747 "name": "BaseBdev3", 00:23:12.747 "uuid": "6cfc8200-b457-5f44-b0a4-2a36bafd6c18", 00:23:12.747 "is_configured": true, 00:23:12.747 "data_offset": 2048, 00:23:12.747 "data_size": 63488 00:23:12.747 }, 00:23:12.747 { 00:23:12.747 "name": "BaseBdev4", 00:23:12.747 "uuid": "98bb5746-bb54-5764-9376-72288f9ca1f4", 00:23:12.747 "is_configured": true, 00:23:12.747 "data_offset": 2048, 00:23:12.747 "data_size": 63488 00:23:12.747 } 00:23:12.747 ] 00:23:12.747 }' 00:23:12.747 10:37:06 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:12.747 [2024-07-12 10:37:06.533560] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:23:12.747 [2024-07-12 10:37:06.533760] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:23:12.747 10:37:06 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]]
00:23:12.747 10:37:06 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:23:12.747 10:37:06 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:23:12.747 10:37:06 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']'
00:23:12.747 10:37:06 -- bdev/bdev_raid.sh@617 -- # '[' = false ']'
/home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected
00:23:12.747 10:37:06 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4
00:23:12.747 10:37:06 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']'
00:23:12.747 10:37:06 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']'
00:23:12.747 10:37:06 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2
00:23:13.006 [2024-07-12 10:37:06.784360] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576
00:23:13.006 [2024-07-12 10:37:06.854535] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:23:13.264 [2024-07-12 10:37:06.932174] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005c70
00:23:13.264 [2024-07-12 10:37:06.932211] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005ee0
00:23:13.264 10:37:07 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]=
00:23:13.264 10:37:07 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- ))
00:23:13.264 10:37:07 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:23:13.264 10:37:07 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:23:13.264 10:37:07 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:23:13.264 10:37:07 -- bdev/bdev_raid.sh@185 -- # local target=spare
00:23:13.264 10:37:07 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:23:13.264 10:37:07 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:13.264 10:37:07 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:13.522 10:37:07 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:23:13.522 "name": "raid_bdev1",
00:23:13.522 "uuid": "50eb36a5-191d-4307-a9e9-185caf35f525",
00:23:13.522 "strip_size_kb": 0,
00:23:13.522 "state": "online",
00:23:13.522 "raid_level": "raid1",
00:23:13.522 "superblock": true,
00:23:13.522 "num_base_bdevs": 4,
00:23:13.522 "num_base_bdevs_discovered": 3,
00:23:13.522 "num_base_bdevs_operational": 3,
00:23:13.522 "process": {
00:23:13.522 "type": "rebuild",
00:23:13.522 "target": "spare",
00:23:13.522 "progress": {
00:23:13.522 "blocks": 26624,
00:23:13.522 "percent": 41
00:23:13.522 }
00:23:13.522 },
00:23:13.522 "base_bdevs_list": [
00:23:13.522 {
00:23:13.522 "name": "spare",
00:23:13.522 "uuid": "0ed469c9-a864-5065-b6cb-9121d245ba37",
00:23:13.522 "is_configured": true,
00:23:13.522 "data_offset": 2048,
00:23:13.522 "data_size": 63488
00:23:13.522 },
00:23:13.522 {
00:23:13.522 "name": null,
00:23:13.522 "uuid": "00000000-0000-0000-0000-000000000000",
00:23:13.522 "is_configured": false,
00:23:13.522 "data_offset": 2048,
00:23:13.522 "data_size": 63488
00:23:13.522 },
00:23:13.522 {
00:23:13.522 "name": "BaseBdev3",
00:23:13.522 "uuid": "6cfc8200-b457-5f44-b0a4-2a36bafd6c18",
00:23:13.522 "is_configured": true,
00:23:13.522 "data_offset": 2048,
00:23:13.522 "data_size": 63488
00:23:13.522 },
00:23:13.522 {
00:23:13.522 "name": "BaseBdev4",
00:23:13.522 "uuid": "98bb5746-bb54-5764-9376-72288f9ca1f4", 00:23:13.522 "is_configured": true, 00:23:13.522 "data_offset": 2048, 00:23:13.522 "data_size": 63488 00:23:13.522 } 00:23:13.522 ] 00:23:13.522 }' 00:23:13.522 10:37:07 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:13.522 10:37:07 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:13.522 10:37:07 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:13.523 10:37:07 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:13.523 10:37:07 -- bdev/bdev_raid.sh@657 -- # local timeout=540 00:23:13.523 10:37:07 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:13.523 10:37:07 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:13.523 10:37:07 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:13.523 10:37:07 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:13.523 10:37:07 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:13.523 10:37:07 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:13.523 10:37:07 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:13.523 10:37:07 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:13.780 [2024-07-12 10:37:07.552954] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:23:13.780 10:37:07 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:13.780 "name": "raid_bdev1", 00:23:13.780 "uuid": "50eb36a5-191d-4307-a9e9-185caf35f525", 00:23:13.780 "strip_size_kb": 0, 00:23:13.780 "state": "online", 00:23:13.780 "raid_level": "raid1", 00:23:13.780 "superblock": true, 00:23:13.780 "num_base_bdevs": 4, 00:23:13.780 "num_base_bdevs_discovered": 3, 00:23:13.780 "num_base_bdevs_operational": 3, 00:23:13.780 "process": { 00:23:13.780 "type": "rebuild", 00:23:13.780 "target": "spare", 00:23:13.780 "progress": { 00:23:13.780 "blocks": 32768, 00:23:13.780 "percent": 51 00:23:13.780 } 00:23:13.780 }, 00:23:13.780 "base_bdevs_list": [ 00:23:13.780 { 00:23:13.780 "name": "spare", 00:23:13.780 "uuid": "0ed469c9-a864-5065-b6cb-9121d245ba37", 00:23:13.780 "is_configured": true, 00:23:13.780 "data_offset": 2048, 00:23:13.780 "data_size": 63488 00:23:13.780 }, 00:23:13.780 { 00:23:13.780 "name": null, 00:23:13.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:13.780 "is_configured": false, 00:23:13.780 "data_offset": 2048, 00:23:13.780 "data_size": 63488 00:23:13.780 }, 00:23:13.780 { 00:23:13.780 "name": "BaseBdev3", 00:23:13.780 "uuid": "6cfc8200-b457-5f44-b0a4-2a36bafd6c18", 00:23:13.780 "is_configured": true, 00:23:13.780 "data_offset": 2048, 00:23:13.780 "data_size": 63488 00:23:13.780 }, 00:23:13.780 { 00:23:13.780 "name": "BaseBdev4", 00:23:13.780 "uuid": "98bb5746-bb54-5764-9376-72288f9ca1f4", 00:23:13.780 "is_configured": true, 00:23:13.780 "data_offset": 2048, 00:23:13.780 "data_size": 63488 00:23:13.780 } 00:23:13.780 ] 00:23:13.780 }' 00:23:13.780 10:37:07 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:13.780 10:37:07 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:13.780 10:37:07 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:14.037 10:37:07 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:14.037 10:37:07 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:14.602 [2024-07-12 10:37:08.467319] bdev_raid.c: 
723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:23:14.860 10:37:08 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:14.860 10:37:08 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:14.860 10:37:08 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:14.860 10:37:08 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:14.860 10:37:08 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:14.860 10:37:08 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:14.860 10:37:08 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:14.860 10:37:08 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:15.118 [2024-07-12 10:37:08.794405] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:23:15.118 10:37:08 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:15.118 "name": "raid_bdev1", 00:23:15.118 "uuid": "50eb36a5-191d-4307-a9e9-185caf35f525", 00:23:15.118 "strip_size_kb": 0, 00:23:15.118 "state": "online", 00:23:15.118 "raid_level": "raid1", 00:23:15.118 "superblock": true, 00:23:15.118 "num_base_bdevs": 4, 00:23:15.118 "num_base_bdevs_discovered": 3, 00:23:15.118 "num_base_bdevs_operational": 3, 00:23:15.118 "process": { 00:23:15.118 "type": "rebuild", 00:23:15.118 "target": "spare", 00:23:15.118 "progress": { 00:23:15.118 "blocks": 53248, 00:23:15.118 "percent": 83 00:23:15.118 } 00:23:15.118 }, 00:23:15.118 "base_bdevs_list": [ 00:23:15.118 { 00:23:15.118 "name": "spare", 00:23:15.118 "uuid": "0ed469c9-a864-5065-b6cb-9121d245ba37", 00:23:15.118 "is_configured": true, 00:23:15.118 "data_offset": 2048, 00:23:15.118 "data_size": 63488 00:23:15.118 }, 00:23:15.118 { 00:23:15.118 "name": null, 00:23:15.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:15.118 "is_configured": false, 00:23:15.118 "data_offset": 2048, 00:23:15.118 "data_size": 63488 00:23:15.118 }, 00:23:15.118 { 00:23:15.118 "name": "BaseBdev3", 00:23:15.118 "uuid": "6cfc8200-b457-5f44-b0a4-2a36bafd6c18", 00:23:15.118 "is_configured": true, 00:23:15.118 "data_offset": 2048, 00:23:15.118 "data_size": 63488 00:23:15.118 }, 00:23:15.118 { 00:23:15.118 "name": "BaseBdev4", 00:23:15.118 "uuid": "98bb5746-bb54-5764-9376-72288f9ca1f4", 00:23:15.118 "is_configured": true, 00:23:15.118 "data_offset": 2048, 00:23:15.118 "data_size": 63488 00:23:15.118 } 00:23:15.118 ] 00:23:15.118 }' 00:23:15.118 10:37:08 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:15.118 10:37:09 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:15.118 10:37:09 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:15.375 10:37:09 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:15.375 10:37:09 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:15.632 [2024-07-12 10:37:09.459191] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:15.889 [2024-07-12 10:37:09.565072] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:15.889 [2024-07-12 10:37:09.567630] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:16.454 10:37:10 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:16.454 10:37:10 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 
00:23:16.454 10:37:10 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:16.454 10:37:10 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:16.454 10:37:10 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:16.454 10:37:10 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:16.454 10:37:10 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:16.454 10:37:10 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:16.454 10:37:10 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:16.454 "name": "raid_bdev1", 00:23:16.454 "uuid": "50eb36a5-191d-4307-a9e9-185caf35f525", 00:23:16.454 "strip_size_kb": 0, 00:23:16.454 "state": "online", 00:23:16.454 "raid_level": "raid1", 00:23:16.454 "superblock": true, 00:23:16.454 "num_base_bdevs": 4, 00:23:16.454 "num_base_bdevs_discovered": 3, 00:23:16.454 "num_base_bdevs_operational": 3, 00:23:16.454 "base_bdevs_list": [ 00:23:16.454 { 00:23:16.454 "name": "spare", 00:23:16.454 "uuid": "0ed469c9-a864-5065-b6cb-9121d245ba37", 00:23:16.454 "is_configured": true, 00:23:16.454 "data_offset": 2048, 00:23:16.454 "data_size": 63488 00:23:16.454 }, 00:23:16.454 { 00:23:16.454 "name": null, 00:23:16.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:16.454 "is_configured": false, 00:23:16.454 "data_offset": 2048, 00:23:16.454 "data_size": 63488 00:23:16.454 }, 00:23:16.454 { 00:23:16.454 "name": "BaseBdev3", 00:23:16.454 "uuid": "6cfc8200-b457-5f44-b0a4-2a36bafd6c18", 00:23:16.454 "is_configured": true, 00:23:16.454 "data_offset": 2048, 00:23:16.454 "data_size": 63488 00:23:16.454 }, 00:23:16.454 { 00:23:16.454 "name": "BaseBdev4", 00:23:16.454 "uuid": "98bb5746-bb54-5764-9376-72288f9ca1f4", 00:23:16.454 "is_configured": true, 00:23:16.454 "data_offset": 2048, 00:23:16.454 "data_size": 63488 00:23:16.454 } 00:23:16.454 ] 00:23:16.454 }' 00:23:16.454 10:37:10 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:16.454 10:37:10 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:16.454 10:37:10 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:16.712 10:37:10 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:23:16.712 10:37:10 -- bdev/bdev_raid.sh@660 -- # break 00:23:16.712 10:37:10 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:16.712 10:37:10 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:16.712 10:37:10 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:16.712 10:37:10 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:16.712 10:37:10 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:16.712 10:37:10 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:16.712 10:37:10 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:16.969 10:37:10 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:16.969 "name": "raid_bdev1", 00:23:16.969 "uuid": "50eb36a5-191d-4307-a9e9-185caf35f525", 00:23:16.969 "strip_size_kb": 0, 00:23:16.969 "state": "online", 00:23:16.970 "raid_level": "raid1", 00:23:16.970 "superblock": true, 00:23:16.970 "num_base_bdevs": 4, 00:23:16.970 "num_base_bdevs_discovered": 3, 00:23:16.970 "num_base_bdevs_operational": 3, 00:23:16.970 "base_bdevs_list": [ 00:23:16.970 { 00:23:16.970 "name": "spare", 00:23:16.970 "uuid": "0ed469c9-a864-5065-b6cb-9121d245ba37", 00:23:16.970 
"is_configured": true, 00:23:16.970 "data_offset": 2048, 00:23:16.970 "data_size": 63488 00:23:16.970 }, 00:23:16.970 { 00:23:16.970 "name": null, 00:23:16.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:16.970 "is_configured": false, 00:23:16.970 "data_offset": 2048, 00:23:16.970 "data_size": 63488 00:23:16.970 }, 00:23:16.970 { 00:23:16.970 "name": "BaseBdev3", 00:23:16.970 "uuid": "6cfc8200-b457-5f44-b0a4-2a36bafd6c18", 00:23:16.970 "is_configured": true, 00:23:16.970 "data_offset": 2048, 00:23:16.970 "data_size": 63488 00:23:16.970 }, 00:23:16.970 { 00:23:16.970 "name": "BaseBdev4", 00:23:16.970 "uuid": "98bb5746-bb54-5764-9376-72288f9ca1f4", 00:23:16.970 "is_configured": true, 00:23:16.970 "data_offset": 2048, 00:23:16.970 "data_size": 63488 00:23:16.970 } 00:23:16.970 ] 00:23:16.970 }' 00:23:16.970 10:37:10 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:16.970 10:37:10 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:16.970 10:37:10 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:16.970 10:37:10 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:16.970 10:37:10 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:16.970 10:37:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:16.970 10:37:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:16.970 10:37:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:16.970 10:37:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:16.970 10:37:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:16.970 10:37:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:16.970 10:37:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:16.970 10:37:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:16.970 10:37:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:16.970 10:37:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:16.970 10:37:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:17.231 10:37:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:17.231 "name": "raid_bdev1", 00:23:17.231 "uuid": "50eb36a5-191d-4307-a9e9-185caf35f525", 00:23:17.231 "strip_size_kb": 0, 00:23:17.231 "state": "online", 00:23:17.231 "raid_level": "raid1", 00:23:17.231 "superblock": true, 00:23:17.231 "num_base_bdevs": 4, 00:23:17.231 "num_base_bdevs_discovered": 3, 00:23:17.231 "num_base_bdevs_operational": 3, 00:23:17.231 "base_bdevs_list": [ 00:23:17.231 { 00:23:17.231 "name": "spare", 00:23:17.231 "uuid": "0ed469c9-a864-5065-b6cb-9121d245ba37", 00:23:17.231 "is_configured": true, 00:23:17.231 "data_offset": 2048, 00:23:17.231 "data_size": 63488 00:23:17.231 }, 00:23:17.231 { 00:23:17.231 "name": null, 00:23:17.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:17.231 "is_configured": false, 00:23:17.231 "data_offset": 2048, 00:23:17.231 "data_size": 63488 00:23:17.231 }, 00:23:17.231 { 00:23:17.231 "name": "BaseBdev3", 00:23:17.231 "uuid": "6cfc8200-b457-5f44-b0a4-2a36bafd6c18", 00:23:17.231 "is_configured": true, 00:23:17.231 "data_offset": 2048, 00:23:17.231 "data_size": 63488 00:23:17.231 }, 00:23:17.231 { 00:23:17.231 "name": "BaseBdev4", 00:23:17.231 "uuid": "98bb5746-bb54-5764-9376-72288f9ca1f4", 00:23:17.231 "is_configured": true, 00:23:17.231 "data_offset": 2048, 00:23:17.231 "data_size": 63488 00:23:17.231 } 00:23:17.231 ] 
00:23:17.231 }'
10:37:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:23:17.231 10:37:10 -- common/autotest_common.sh@10 -- # set +x
00:23:17.819 10:37:11 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:23:18.077 [2024-07-12 10:37:11.877518] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:23:18.077 [2024-07-12 10:37:11.877589] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:23:18.077
00:23:18.077 Latency(us)
00:23:18.077 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:18.077 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728)
00:23:18.077 raid_bdev1 : 11.16 110.11 330.32 0.00 0.00 13035.05 303.48 114866.73
00:23:18.077 ===================================================================================================================
00:23:18.077 Total : 110.11 330.32 0.00 0.00 13035.05 303.48 114866.73
00:23:18.077 [2024-07-12 10:37:11.960273] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:23:18.077 [2024-07-12 10:37:11.960316] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:23:18.077 [2024-07-12 10:37:11.960420] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:23:18.077 [2024-07-12 10:37:11.960433] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline
00:23:18.077 0
00:23:18.077 10:37:11 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:18.077 10:37:11 -- bdev/bdev_raid.sh@671 -- # jq length
00:23:18.335 10:37:12 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]]
00:23:18.335 10:37:12 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']'
00:23:18.335 10:37:12 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0
00:23:18.335 10:37:12 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:23:18.335 10:37:12 -- bdev/nbd_common.sh@10 -- # bdev_list=($2)
00:23:18.335 10:37:12 -- bdev/nbd_common.sh@10 -- # local bdev_list
00:23:18.335 10:37:12 -- bdev/nbd_common.sh@11 -- # nbd_list=($3)
00:23:18.335 10:37:12 -- bdev/nbd_common.sh@11 -- # local nbd_list
00:23:18.335 10:37:12 -- bdev/nbd_common.sh@12 -- # local i
00:23:18.335 10:37:12 -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:23:18.335 10:37:12 -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:23:18.335 10:37:12 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0
00:23:18.594 /dev/nbd0
00:23:18.594 10:37:12 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:23:18.594 10:37:12 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:23:18.594 10:37:12 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0
00:23:18.594 10:37:12 -- common/autotest_common.sh@857 -- # local i
00:23:18.594 10:37:12 -- common/autotest_common.sh@859 -- # (( i = 1 ))
00:23:18.594 10:37:12 -- common/autotest_common.sh@859 -- # (( i <= 20 ))
00:23:18.594 10:37:12 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions
00:23:18.594 10:37:12 -- common/autotest_common.sh@861 -- # break
00:23:18.594 10:37:12 -- common/autotest_common.sh@872 -- # (( i = 1 ))
00:23:18.594 10:37:12 -- common/autotest_common.sh@872 -- # (( i <= 20 ))
00:23:18.594 10:37:12 --
common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:18.594 1+0 records in 00:23:18.594 1+0 records out 00:23:18.594 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000485625 s, 8.4 MB/s 00:23:18.594 10:37:12 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:18.594 10:37:12 -- common/autotest_common.sh@874 -- # size=4096 00:23:18.594 10:37:12 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:18.594 10:37:12 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:23:18.594 10:37:12 -- common/autotest_common.sh@877 -- # return 0 00:23:18.594 10:37:12 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:18.594 10:37:12 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:18.594 10:37:12 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:23:18.594 10:37:12 -- bdev/bdev_raid.sh@677 -- # '[' -z '' ']' 00:23:18.594 10:37:12 -- bdev/bdev_raid.sh@678 -- # continue 00:23:18.594 10:37:12 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:23:18.594 10:37:12 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev3 ']' 00:23:18.594 10:37:12 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:23:18.594 10:37:12 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:18.594 10:37:12 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:23:18.594 10:37:12 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:18.594 10:37:12 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:23:18.594 10:37:12 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:18.594 10:37:12 -- bdev/nbd_common.sh@12 -- # local i 00:23:18.594 10:37:12 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:18.594 10:37:12 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:18.594 10:37:12 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:23:18.852 /dev/nbd1 00:23:18.852 10:37:12 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:18.852 10:37:12 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:18.852 10:37:12 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:23:18.852 10:37:12 -- common/autotest_common.sh@857 -- # local i 00:23:18.852 10:37:12 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:23:18.852 10:37:12 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:23:18.852 10:37:12 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:23:18.852 10:37:12 -- common/autotest_common.sh@861 -- # break 00:23:18.852 10:37:12 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:23:18.852 10:37:12 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:23:18.852 10:37:12 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:18.852 1+0 records in 00:23:18.852 1+0 records out 00:23:18.852 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000513847 s, 8.0 MB/s 00:23:18.852 10:37:12 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:18.852 10:37:12 -- common/autotest_common.sh@874 -- # size=4096 00:23:18.852 10:37:12 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:18.852 10:37:12 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:23:18.852 10:37:12 -- common/autotest_common.sh@877 -- # return 0 00:23:18.852 10:37:12 -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:18.852 10:37:12 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:18.852 10:37:12 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:23:19.111 10:37:12 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:23:19.111 10:37:12 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:19.111 10:37:12 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:23:19.111 10:37:12 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:19.111 10:37:12 -- bdev/nbd_common.sh@51 -- # local i 00:23:19.111 10:37:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:19.111 10:37:12 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:23:19.368 10:37:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:19.368 10:37:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:19.368 10:37:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:19.368 10:37:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:19.368 10:37:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:19.368 10:37:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:19.368 10:37:13 -- bdev/nbd_common.sh@41 -- # break 00:23:19.368 10:37:13 -- bdev/nbd_common.sh@45 -- # return 0 00:23:19.368 10:37:13 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:23:19.368 10:37:13 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev4 ']' 00:23:19.368 10:37:13 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:23:19.368 10:37:13 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:19.368 10:37:13 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:23:19.368 10:37:13 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:19.368 10:37:13 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:23:19.368 10:37:13 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:19.368 10:37:13 -- bdev/nbd_common.sh@12 -- # local i 00:23:19.368 10:37:13 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:19.368 10:37:13 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:19.368 10:37:13 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:23:19.368 /dev/nbd1 00:23:19.625 10:37:13 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:19.625 10:37:13 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:19.625 10:37:13 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:23:19.625 10:37:13 -- common/autotest_common.sh@857 -- # local i 00:23:19.625 10:37:13 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:23:19.625 10:37:13 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:23:19.625 10:37:13 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:23:19.625 10:37:13 -- common/autotest_common.sh@861 -- # break 00:23:19.625 10:37:13 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:23:19.625 10:37:13 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:23:19.625 10:37:13 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:19.625 1+0 records in 00:23:19.625 1+0 records out 00:23:19.625 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00049137 s, 8.3 MB/s 00:23:19.625 10:37:13 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:19.625 10:37:13 -- common/autotest_common.sh@874 -- # 
size=4096 00:23:19.625 10:37:13 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:19.625 10:37:13 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:23:19.625 10:37:13 -- common/autotest_common.sh@877 -- # return 0 00:23:19.625 10:37:13 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:19.625 10:37:13 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:19.625 10:37:13 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:23:19.625 10:37:13 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:23:19.625 10:37:13 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:19.625 10:37:13 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:23:19.625 10:37:13 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:19.625 10:37:13 -- bdev/nbd_common.sh@51 -- # local i 00:23:19.625 10:37:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:19.625 10:37:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:23:19.882 10:37:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:19.882 10:37:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:19.882 10:37:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:19.882 10:37:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:19.882 10:37:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:19.882 10:37:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:19.882 10:37:13 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:23:19.882 10:37:13 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:23:19.882 10:37:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:19.883 10:37:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:19.883 10:37:13 -- bdev/nbd_common.sh@41 -- # break 00:23:19.883 10:37:13 -- bdev/nbd_common.sh@45 -- # return 0 00:23:19.883 10:37:13 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:23:19.883 10:37:13 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:19.883 10:37:13 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:23:19.883 10:37:13 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:19.883 10:37:13 -- bdev/nbd_common.sh@51 -- # local i 00:23:19.883 10:37:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:19.883 10:37:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:20.140 10:37:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:20.140 10:37:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:20.140 10:37:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:20.140 10:37:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:20.140 10:37:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:20.140 10:37:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:20.140 10:37:13 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:23:20.140 10:37:13 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:23:20.140 10:37:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:20.140 10:37:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:20.140 10:37:13 -- bdev/nbd_common.sh@41 -- # break 00:23:20.140 10:37:13 -- bdev/nbd_common.sh@45 -- # return 0 00:23:20.140 10:37:13 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:23:20.140 10:37:13 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:23:20.140 10:37:13 -- 
bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:23:20.140 10:37:13 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:23:20.397 10:37:14 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:20.655 [2024-07-12 10:37:14.491526] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:20.655 [2024-07-12 10:37:14.491620] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:20.655 [2024-07-12 10:37:14.491662] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:23:20.655 [2024-07-12 10:37:14.491684] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:20.655 [2024-07-12 10:37:14.494115] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:20.655 [2024-07-12 10:37:14.494182] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:20.655 [2024-07-12 10:37:14.494286] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:23:20.655 [2024-07-12 10:37:14.494352] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:20.655 BaseBdev1 00:23:20.655 10:37:14 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:23:20.655 10:37:14 -- bdev/bdev_raid.sh@695 -- # '[' -z '' ']' 00:23:20.655 10:37:14 -- bdev/bdev_raid.sh@696 -- # continue 00:23:20.655 10:37:14 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:23:20.655 10:37:14 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:23:20.655 10:37:14 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:23:20.913 10:37:14 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:23:21.170 [2024-07-12 10:37:14.855632] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:23:21.170 [2024-07-12 10:37:14.855687] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:21.170 [2024-07-12 10:37:14.855723] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:23:21.170 [2024-07-12 10:37:14.855742] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:21.170 [2024-07-12 10:37:14.856105] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:21.170 [2024-07-12 10:37:14.856157] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:21.170 [2024-07-12 10:37:14.856251] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:23:21.170 [2024-07-12 10:37:14.856265] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev3 (4) greater than existing raid bdev raid_bdev1 (1) 00:23:21.170 [2024-07-12 10:37:14.856273] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:21.170 [2024-07-12 10:37:14.856290] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c080 name raid_bdev1, state configuring 00:23:21.170 [2024-07-12 10:37:14.856357] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is 
claimed 00:23:21.170 BaseBdev3 00:23:21.170 10:37:14 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:23:21.170 10:37:14 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:23:21.170 10:37:14 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:23:21.427 10:37:15 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:23:21.427 [2024-07-12 10:37:15.267716] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:23:21.427 [2024-07-12 10:37:15.267781] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:21.427 [2024-07-12 10:37:15.267813] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:23:21.427 [2024-07-12 10:37:15.267845] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:21.427 [2024-07-12 10:37:15.268202] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:21.427 [2024-07-12 10:37:15.268250] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:23:21.427 [2024-07-12 10:37:15.268336] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:23:21.427 [2024-07-12 10:37:15.268358] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:21.427 BaseBdev4 00:23:21.427 10:37:15 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:23:21.684 10:37:15 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:21.942 [2024-07-12 10:37:15.627855] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:21.942 [2024-07-12 10:37:15.627914] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:21.942 [2024-07-12 10:37:15.627943] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:23:21.942 [2024-07-12 10:37:15.627969] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:21.942 [2024-07-12 10:37:15.628355] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:21.942 [2024-07-12 10:37:15.628407] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:21.942 [2024-07-12 10:37:15.628494] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:23:21.942 [2024-07-12 10:37:15.628526] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:21.942 spare 00:23:21.942 10:37:15 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:21.942 10:37:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:21.942 10:37:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:21.942 10:37:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:21.942 10:37:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:21.942 10:37:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:21.942 10:37:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:21.942 10:37:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:21.942 10:37:15 -- 
bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:21.942 10:37:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:21.942 10:37:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:21.942 10:37:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:21.942 [2024-07-12 10:37:15.728636] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c680 00:23:21.942 [2024-07-12 10:37:15.728657] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:21.942 [2024-07-12 10:37:15.728769] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00003a220 00:23:21.942 [2024-07-12 10:37:15.729123] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c680 00:23:21.942 [2024-07-12 10:37:15.729136] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c680 00:23:21.942 [2024-07-12 10:37:15.729254] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:22.200 10:37:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:22.200 "name": "raid_bdev1", 00:23:22.200 "uuid": "50eb36a5-191d-4307-a9e9-185caf35f525", 00:23:22.200 "strip_size_kb": 0, 00:23:22.200 "state": "online", 00:23:22.200 "raid_level": "raid1", 00:23:22.200 "superblock": true, 00:23:22.200 "num_base_bdevs": 4, 00:23:22.200 "num_base_bdevs_discovered": 3, 00:23:22.200 "num_base_bdevs_operational": 3, 00:23:22.200 "base_bdevs_list": [ 00:23:22.200 { 00:23:22.200 "name": "spare", 00:23:22.200 "uuid": "0ed469c9-a864-5065-b6cb-9121d245ba37", 00:23:22.200 "is_configured": true, 00:23:22.200 "data_offset": 2048, 00:23:22.200 "data_size": 63488 00:23:22.200 }, 00:23:22.200 { 00:23:22.200 "name": null, 00:23:22.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:22.200 "is_configured": false, 00:23:22.200 "data_offset": 2048, 00:23:22.200 "data_size": 63488 00:23:22.200 }, 00:23:22.200 { 00:23:22.200 "name": "BaseBdev3", 00:23:22.200 "uuid": "6cfc8200-b457-5f44-b0a4-2a36bafd6c18", 00:23:22.200 "is_configured": true, 00:23:22.200 "data_offset": 2048, 00:23:22.200 "data_size": 63488 00:23:22.200 }, 00:23:22.200 { 00:23:22.200 "name": "BaseBdev4", 00:23:22.200 "uuid": "98bb5746-bb54-5764-9376-72288f9ca1f4", 00:23:22.200 "is_configured": true, 00:23:22.200 "data_offset": 2048, 00:23:22.200 "data_size": 63488 00:23:22.200 } 00:23:22.200 ] 00:23:22.200 }' 00:23:22.200 10:37:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:22.200 10:37:15 -- common/autotest_common.sh@10 -- # set +x 00:23:22.766 10:37:16 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:22.766 10:37:16 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:22.766 10:37:16 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:22.766 10:37:16 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:22.766 10:37:16 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:22.766 10:37:16 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:22.766 10:37:16 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:23.024 10:37:16 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:23.024 "name": "raid_bdev1", 00:23:23.024 "uuid": "50eb36a5-191d-4307-a9e9-185caf35f525", 00:23:23.024 "strip_size_kb": 0, 00:23:23.024 "state": "online", 00:23:23.024 
"raid_level": "raid1", 00:23:23.024 "superblock": true, 00:23:23.024 "num_base_bdevs": 4, 00:23:23.024 "num_base_bdevs_discovered": 3, 00:23:23.024 "num_base_bdevs_operational": 3, 00:23:23.024 "base_bdevs_list": [ 00:23:23.024 { 00:23:23.024 "name": "spare", 00:23:23.024 "uuid": "0ed469c9-a864-5065-b6cb-9121d245ba37", 00:23:23.024 "is_configured": true, 00:23:23.024 "data_offset": 2048, 00:23:23.024 "data_size": 63488 00:23:23.024 }, 00:23:23.024 { 00:23:23.024 "name": null, 00:23:23.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:23.024 "is_configured": false, 00:23:23.024 "data_offset": 2048, 00:23:23.024 "data_size": 63488 00:23:23.024 }, 00:23:23.024 { 00:23:23.024 "name": "BaseBdev3", 00:23:23.024 "uuid": "6cfc8200-b457-5f44-b0a4-2a36bafd6c18", 00:23:23.024 "is_configured": true, 00:23:23.024 "data_offset": 2048, 00:23:23.024 "data_size": 63488 00:23:23.024 }, 00:23:23.024 { 00:23:23.024 "name": "BaseBdev4", 00:23:23.024 "uuid": "98bb5746-bb54-5764-9376-72288f9ca1f4", 00:23:23.024 "is_configured": true, 00:23:23.024 "data_offset": 2048, 00:23:23.024 "data_size": 63488 00:23:23.024 } 00:23:23.024 ] 00:23:23.024 }' 00:23:23.024 10:37:16 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:23.024 10:37:16 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:23.024 10:37:16 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:23.024 10:37:16 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:23.024 10:37:16 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:23.024 10:37:16 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:23:23.282 10:37:17 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:23:23.282 10:37:17 -- bdev/bdev_raid.sh@709 -- # killprocess 129974 00:23:23.282 10:37:17 -- common/autotest_common.sh@926 -- # '[' -z 129974 ']' 00:23:23.282 10:37:17 -- common/autotest_common.sh@930 -- # kill -0 129974 00:23:23.282 10:37:17 -- common/autotest_common.sh@931 -- # uname 00:23:23.282 10:37:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:23.282 10:37:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 129974 00:23:23.282 killing process with pid 129974 00:23:23.282 Received shutdown signal, test time was about 16.291075 seconds 00:23:23.282 00:23:23.282 Latency(us) 00:23:23.282 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:23.282 =================================================================================================================== 00:23:23.282 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:23.282 10:37:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:23.282 10:37:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:23.282 10:37:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 129974' 00:23:23.282 10:37:17 -- common/autotest_common.sh@945 -- # kill 129974 00:23:23.282 10:37:17 -- common/autotest_common.sh@950 -- # wait 129974 00:23:23.282 [2024-07-12 10:37:17.074574] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:23.282 [2024-07-12 10:37:17.074643] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:23.282 [2024-07-12 10:37:17.074711] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:23.282 [2024-07-12 10:37:17.074722] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x61600000c680 name raid_bdev1, state offline 00:23:23.541 [2024-07-12 10:37:17.365004] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:24.917 ************************************ 00:23:24.917 END TEST raid_rebuild_test_sb_io 00:23:24.917 ************************************ 00:23:24.917 10:37:18 -- bdev/bdev_raid.sh@711 -- # return 0 00:23:24.917 00:23:24.917 real 0m22.634s 00:23:24.917 user 0m36.748s 00:23:24.917 sys 0m2.424s 00:23:24.917 10:37:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:24.917 10:37:18 -- common/autotest_common.sh@10 -- # set +x 00:23:24.917 10:37:18 -- bdev/bdev_raid.sh@742 -- # '[' y == y ']' 00:23:24.917 10:37:18 -- bdev/bdev_raid.sh@743 -- # for n in {3..4} 00:23:24.917 10:37:18 -- bdev/bdev_raid.sh@744 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:23:24.917 10:37:18 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:23:24.917 10:37:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:24.917 10:37:18 -- common/autotest_common.sh@10 -- # set +x 00:23:24.917 ************************************ 00:23:24.917 START TEST raid5f_state_function_test 00:23:24.917 ************************************ 00:23:24.917 10:37:18 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid5f 3 false 00:23:24.917 10:37:18 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:23:24.917 10:37:18 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:23:24.917 10:37:18 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:23:24.917 10:37:18 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:23:24.917 10:37:18 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:23:24.917 10:37:18 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:23:24.917 10:37:18 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:24.917 10:37:18 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:23:24.917 10:37:18 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:23:24.917 10:37:18 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:24.917 10:37:18 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:23:24.917 10:37:18 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:23:24.917 10:37:18 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:24.917 10:37:18 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:23:24.917 10:37:18 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:23:24.917 10:37:18 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:24.917 10:37:18 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:23:24.917 10:37:18 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:23:24.917 10:37:18 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:23:24.917 10:37:18 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:23:24.917 10:37:18 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:23:24.917 10:37:18 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:23:24.917 10:37:18 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:23:24.917 10:37:18 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:23:24.917 10:37:18 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:23:24.917 10:37:18 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:23:24.917 10:37:18 -- bdev/bdev_raid.sh@226 -- # raid_pid=130611 00:23:24.917 Process raid pid: 130611 00:23:24.917 10:37:18 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 130611' 00:23:24.917 10:37:18 -- bdev/bdev_raid.sh@228 -- # waitforlisten 130611 
/var/tmp/spdk-raid.sock 00:23:24.917 10:37:18 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:23:24.917 10:37:18 -- common/autotest_common.sh@819 -- # '[' -z 130611 ']' 00:23:24.917 10:37:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:24.917 10:37:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:24.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:24.917 10:37:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:24.917 10:37:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:24.917 10:37:18 -- common/autotest_common.sh@10 -- # set +x 00:23:24.917 [2024-07-12 10:37:18.549519] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:23:24.917 [2024-07-12 10:37:18.549692] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:24.917 [2024-07-12 10:37:18.700932] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:25.176 [2024-07-12 10:37:18.882285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:25.176 [2024-07-12 10:37:19.069839] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:25.743 10:37:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:25.743 10:37:19 -- common/autotest_common.sh@852 -- # return 0 00:23:25.743 10:37:19 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:23:25.743 [2024-07-12 10:37:19.645601] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:25.743 [2024-07-12 10:37:19.645690] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:25.743 [2024-07-12 10:37:19.645703] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:25.743 [2024-07-12 10:37:19.645723] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:25.743 [2024-07-12 10:37:19.645730] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:25.743 [2024-07-12 10:37:19.645774] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:26.002 10:37:19 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:23:26.002 10:37:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:26.002 10:37:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:26.002 10:37:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:26.002 10:37:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:26.002 10:37:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:26.002 10:37:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:26.002 10:37:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:26.002 10:37:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:26.002 10:37:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:26.002 10:37:19 -- bdev/bdev_raid.sh@127 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:26.002 10:37:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:26.002 10:37:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:26.002 "name": "Existed_Raid", 00:23:26.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:26.002 "strip_size_kb": 64, 00:23:26.002 "state": "configuring", 00:23:26.002 "raid_level": "raid5f", 00:23:26.002 "superblock": false, 00:23:26.002 "num_base_bdevs": 3, 00:23:26.002 "num_base_bdevs_discovered": 0, 00:23:26.002 "num_base_bdevs_operational": 3, 00:23:26.002 "base_bdevs_list": [ 00:23:26.002 { 00:23:26.002 "name": "BaseBdev1", 00:23:26.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:26.002 "is_configured": false, 00:23:26.002 "data_offset": 0, 00:23:26.002 "data_size": 0 00:23:26.002 }, 00:23:26.002 { 00:23:26.002 "name": "BaseBdev2", 00:23:26.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:26.002 "is_configured": false, 00:23:26.002 "data_offset": 0, 00:23:26.002 "data_size": 0 00:23:26.002 }, 00:23:26.002 { 00:23:26.002 "name": "BaseBdev3", 00:23:26.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:26.002 "is_configured": false, 00:23:26.002 "data_offset": 0, 00:23:26.002 "data_size": 0 00:23:26.002 } 00:23:26.002 ] 00:23:26.002 }' 00:23:26.002 10:37:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:26.002 10:37:19 -- common/autotest_common.sh@10 -- # set +x 00:23:26.936 10:37:20 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:26.936 [2024-07-12 10:37:20.753691] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:26.936 [2024-07-12 10:37:20.753727] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:23:26.936 10:37:20 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:23:27.193 [2024-07-12 10:37:20.933756] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:27.193 [2024-07-12 10:37:20.933808] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:27.193 [2024-07-12 10:37:20.933819] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:27.193 [2024-07-12 10:37:20.933837] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:27.193 [2024-07-12 10:37:20.933843] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:27.193 [2024-07-12 10:37:20.933873] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:27.193 10:37:20 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:27.450 [2024-07-12 10:37:21.151506] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:27.450 BaseBdev1 00:23:27.450 10:37:21 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:23:27.450 10:37:21 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:23:27.450 10:37:21 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:23:27.450 10:37:21 -- common/autotest_common.sh@889 -- # local i 00:23:27.450 10:37:21 -- 
common/autotest_common.sh@890 -- # [[ -z '' ]] 00:23:27.450 10:37:21 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:23:27.450 10:37:21 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:27.450 10:37:21 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:27.708 [ 00:23:27.708 { 00:23:27.708 "name": "BaseBdev1", 00:23:27.708 "aliases": [ 00:23:27.708 "21752d0e-77cc-4282-9316-9dbb7e4d4249" 00:23:27.708 ], 00:23:27.708 "product_name": "Malloc disk", 00:23:27.708 "block_size": 512, 00:23:27.708 "num_blocks": 65536, 00:23:27.708 "uuid": "21752d0e-77cc-4282-9316-9dbb7e4d4249", 00:23:27.708 "assigned_rate_limits": { 00:23:27.708 "rw_ios_per_sec": 0, 00:23:27.708 "rw_mbytes_per_sec": 0, 00:23:27.708 "r_mbytes_per_sec": 0, 00:23:27.708 "w_mbytes_per_sec": 0 00:23:27.708 }, 00:23:27.708 "claimed": true, 00:23:27.708 "claim_type": "exclusive_write", 00:23:27.708 "zoned": false, 00:23:27.708 "supported_io_types": { 00:23:27.708 "read": true, 00:23:27.708 "write": true, 00:23:27.708 "unmap": true, 00:23:27.708 "write_zeroes": true, 00:23:27.708 "flush": true, 00:23:27.708 "reset": true, 00:23:27.708 "compare": false, 00:23:27.708 "compare_and_write": false, 00:23:27.708 "abort": true, 00:23:27.708 "nvme_admin": false, 00:23:27.708 "nvme_io": false 00:23:27.708 }, 00:23:27.708 "memory_domains": [ 00:23:27.708 { 00:23:27.708 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:27.708 "dma_device_type": 2 00:23:27.708 } 00:23:27.708 ], 00:23:27.708 "driver_specific": {} 00:23:27.708 } 00:23:27.708 ] 00:23:27.708 10:37:21 -- common/autotest_common.sh@895 -- # return 0 00:23:27.708 10:37:21 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:23:27.708 10:37:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:27.708 10:37:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:27.708 10:37:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:27.708 10:37:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:27.708 10:37:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:27.708 10:37:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:27.708 10:37:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:27.708 10:37:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:27.708 10:37:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:27.708 10:37:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:27.708 10:37:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:27.966 10:37:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:27.966 "name": "Existed_Raid", 00:23:27.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:27.966 "strip_size_kb": 64, 00:23:27.966 "state": "configuring", 00:23:27.966 "raid_level": "raid5f", 00:23:27.966 "superblock": false, 00:23:27.966 "num_base_bdevs": 3, 00:23:27.966 "num_base_bdevs_discovered": 1, 00:23:27.966 "num_base_bdevs_operational": 3, 00:23:27.966 "base_bdevs_list": [ 00:23:27.966 { 00:23:27.966 "name": "BaseBdev1", 00:23:27.966 "uuid": "21752d0e-77cc-4282-9316-9dbb7e4d4249", 00:23:27.966 "is_configured": true, 00:23:27.966 "data_offset": 0, 00:23:27.966 "data_size": 65536 00:23:27.966 }, 00:23:27.966 { 00:23:27.966 "name": 
"BaseBdev2", 00:23:27.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:27.966 "is_configured": false, 00:23:27.966 "data_offset": 0, 00:23:27.966 "data_size": 0 00:23:27.966 }, 00:23:27.966 { 00:23:27.966 "name": "BaseBdev3", 00:23:27.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:27.966 "is_configured": false, 00:23:27.966 "data_offset": 0, 00:23:27.966 "data_size": 0 00:23:27.966 } 00:23:27.966 ] 00:23:27.966 }' 00:23:27.966 10:37:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:27.966 10:37:21 -- common/autotest_common.sh@10 -- # set +x 00:23:28.532 10:37:22 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:28.790 [2024-07-12 10:37:22.667795] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:28.790 [2024-07-12 10:37:22.667831] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:23:28.790 10:37:22 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:23:28.790 10:37:22 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:23:29.047 [2024-07-12 10:37:22.919877] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:29.047 [2024-07-12 10:37:22.921728] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:29.047 [2024-07-12 10:37:22.921784] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:29.047 [2024-07-12 10:37:22.921796] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:29.047 [2024-07-12 10:37:22.921820] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:29.047 10:37:22 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:23:29.047 10:37:22 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:29.047 10:37:22 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:23:29.047 10:37:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:29.047 10:37:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:29.047 10:37:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:29.047 10:37:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:29.047 10:37:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:29.047 10:37:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:29.047 10:37:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:29.047 10:37:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:29.047 10:37:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:29.047 10:37:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:29.047 10:37:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:29.314 10:37:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:29.314 "name": "Existed_Raid", 00:23:29.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:29.314 "strip_size_kb": 64, 00:23:29.314 "state": "configuring", 00:23:29.314 "raid_level": "raid5f", 00:23:29.314 "superblock": false, 00:23:29.314 "num_base_bdevs": 3, 00:23:29.314 "num_base_bdevs_discovered": 1, 00:23:29.314 
"num_base_bdevs_operational": 3, 00:23:29.314 "base_bdevs_list": [ 00:23:29.314 { 00:23:29.314 "name": "BaseBdev1", 00:23:29.314 "uuid": "21752d0e-77cc-4282-9316-9dbb7e4d4249", 00:23:29.314 "is_configured": true, 00:23:29.314 "data_offset": 0, 00:23:29.314 "data_size": 65536 00:23:29.314 }, 00:23:29.314 { 00:23:29.314 "name": "BaseBdev2", 00:23:29.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:29.314 "is_configured": false, 00:23:29.314 "data_offset": 0, 00:23:29.314 "data_size": 0 00:23:29.314 }, 00:23:29.314 { 00:23:29.314 "name": "BaseBdev3", 00:23:29.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:29.314 "is_configured": false, 00:23:29.314 "data_offset": 0, 00:23:29.314 "data_size": 0 00:23:29.314 } 00:23:29.314 ] 00:23:29.314 }' 00:23:29.314 10:37:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:29.314 10:37:23 -- common/autotest_common.sh@10 -- # set +x 00:23:30.250 10:37:23 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:30.250 [2024-07-12 10:37:24.099068] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:30.250 BaseBdev2 00:23:30.250 10:37:24 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:23:30.250 10:37:24 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:23:30.250 10:37:24 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:23:30.250 10:37:24 -- common/autotest_common.sh@889 -- # local i 00:23:30.250 10:37:24 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:23:30.250 10:37:24 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:23:30.250 10:37:24 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:30.507 10:37:24 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:30.765 [ 00:23:30.765 { 00:23:30.765 "name": "BaseBdev2", 00:23:30.765 "aliases": [ 00:23:30.765 "04001dcf-6623-476e-8267-424c926ebf20" 00:23:30.765 ], 00:23:30.765 "product_name": "Malloc disk", 00:23:30.765 "block_size": 512, 00:23:30.765 "num_blocks": 65536, 00:23:30.765 "uuid": "04001dcf-6623-476e-8267-424c926ebf20", 00:23:30.765 "assigned_rate_limits": { 00:23:30.765 "rw_ios_per_sec": 0, 00:23:30.765 "rw_mbytes_per_sec": 0, 00:23:30.765 "r_mbytes_per_sec": 0, 00:23:30.765 "w_mbytes_per_sec": 0 00:23:30.765 }, 00:23:30.765 "claimed": true, 00:23:30.765 "claim_type": "exclusive_write", 00:23:30.765 "zoned": false, 00:23:30.765 "supported_io_types": { 00:23:30.765 "read": true, 00:23:30.765 "write": true, 00:23:30.765 "unmap": true, 00:23:30.765 "write_zeroes": true, 00:23:30.765 "flush": true, 00:23:30.765 "reset": true, 00:23:30.765 "compare": false, 00:23:30.765 "compare_and_write": false, 00:23:30.765 "abort": true, 00:23:30.765 "nvme_admin": false, 00:23:30.765 "nvme_io": false 00:23:30.765 }, 00:23:30.765 "memory_domains": [ 00:23:30.765 { 00:23:30.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:30.765 "dma_device_type": 2 00:23:30.765 } 00:23:30.765 ], 00:23:30.765 "driver_specific": {} 00:23:30.765 } 00:23:30.765 ] 00:23:30.765 10:37:24 -- common/autotest_common.sh@895 -- # return 0 00:23:30.765 10:37:24 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:23:30.765 10:37:24 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:30.765 10:37:24 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring 
raid5f 64 3 00:23:30.765 10:37:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:30.765 10:37:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:30.765 10:37:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:30.765 10:37:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:30.765 10:37:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:30.765 10:37:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:30.765 10:37:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:30.765 10:37:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:30.766 10:37:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:30.766 10:37:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:30.766 10:37:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:31.024 10:37:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:31.024 "name": "Existed_Raid", 00:23:31.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:31.024 "strip_size_kb": 64, 00:23:31.024 "state": "configuring", 00:23:31.024 "raid_level": "raid5f", 00:23:31.024 "superblock": false, 00:23:31.024 "num_base_bdevs": 3, 00:23:31.024 "num_base_bdevs_discovered": 2, 00:23:31.024 "num_base_bdevs_operational": 3, 00:23:31.024 "base_bdevs_list": [ 00:23:31.024 { 00:23:31.024 "name": "BaseBdev1", 00:23:31.024 "uuid": "21752d0e-77cc-4282-9316-9dbb7e4d4249", 00:23:31.024 "is_configured": true, 00:23:31.024 "data_offset": 0, 00:23:31.024 "data_size": 65536 00:23:31.024 }, 00:23:31.024 { 00:23:31.024 "name": "BaseBdev2", 00:23:31.024 "uuid": "04001dcf-6623-476e-8267-424c926ebf20", 00:23:31.024 "is_configured": true, 00:23:31.024 "data_offset": 0, 00:23:31.024 "data_size": 65536 00:23:31.024 }, 00:23:31.024 { 00:23:31.024 "name": "BaseBdev3", 00:23:31.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:31.024 "is_configured": false, 00:23:31.024 "data_offset": 0, 00:23:31.024 "data_size": 0 00:23:31.024 } 00:23:31.024 ] 00:23:31.024 }' 00:23:31.024 10:37:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:31.024 10:37:24 -- common/autotest_common.sh@10 -- # set +x 00:23:31.590 10:37:25 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:31.848 [2024-07-12 10:37:25.610888] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:31.848 [2024-07-12 10:37:25.611152] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:23:31.848 [2024-07-12 10:37:25.611195] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:23:31.848 [2024-07-12 10:37:25.611424] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:23:31.848 [2024-07-12 10:37:25.615836] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:23:31.848 [2024-07-12 10:37:25.615963] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:23:31.848 [2024-07-12 10:37:25.616343] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:31.848 BaseBdev3 00:23:31.848 10:37:25 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:23:31.848 10:37:25 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:23:31.848 10:37:25 -- 
common/autotest_common.sh@888 -- # local bdev_timeout= 00:23:31.848 10:37:25 -- common/autotest_common.sh@889 -- # local i 00:23:31.848 10:37:25 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:23:31.848 10:37:25 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:23:31.848 10:37:25 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:32.106 10:37:25 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:32.364 [ 00:23:32.364 { 00:23:32.364 "name": "BaseBdev3", 00:23:32.364 "aliases": [ 00:23:32.364 "80bd8ed1-ea1d-4c6d-956d-680a3c1f1f9e" 00:23:32.364 ], 00:23:32.364 "product_name": "Malloc disk", 00:23:32.364 "block_size": 512, 00:23:32.364 "num_blocks": 65536, 00:23:32.364 "uuid": "80bd8ed1-ea1d-4c6d-956d-680a3c1f1f9e", 00:23:32.364 "assigned_rate_limits": { 00:23:32.364 "rw_ios_per_sec": 0, 00:23:32.364 "rw_mbytes_per_sec": 0, 00:23:32.364 "r_mbytes_per_sec": 0, 00:23:32.364 "w_mbytes_per_sec": 0 00:23:32.364 }, 00:23:32.364 "claimed": true, 00:23:32.364 "claim_type": "exclusive_write", 00:23:32.364 "zoned": false, 00:23:32.364 "supported_io_types": { 00:23:32.364 "read": true, 00:23:32.364 "write": true, 00:23:32.364 "unmap": true, 00:23:32.364 "write_zeroes": true, 00:23:32.364 "flush": true, 00:23:32.364 "reset": true, 00:23:32.364 "compare": false, 00:23:32.364 "compare_and_write": false, 00:23:32.364 "abort": true, 00:23:32.364 "nvme_admin": false, 00:23:32.364 "nvme_io": false 00:23:32.364 }, 00:23:32.364 "memory_domains": [ 00:23:32.364 { 00:23:32.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:32.364 "dma_device_type": 2 00:23:32.364 } 00:23:32.364 ], 00:23:32.364 "driver_specific": {} 00:23:32.364 } 00:23:32.364 ] 00:23:32.364 10:37:26 -- common/autotest_common.sh@895 -- # return 0 00:23:32.364 10:37:26 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:23:32.364 10:37:26 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:32.364 10:37:26 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:23:32.364 10:37:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:32.364 10:37:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:32.364 10:37:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:32.364 10:37:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:32.364 10:37:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:32.364 10:37:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:32.364 10:37:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:32.364 10:37:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:32.364 10:37:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:32.364 10:37:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:32.364 10:37:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:32.622 10:37:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:32.622 "name": "Existed_Raid", 00:23:32.622 "uuid": "533c8c64-79d9-410d-b1bc-c95d86ecbaf9", 00:23:32.622 "strip_size_kb": 64, 00:23:32.622 "state": "online", 00:23:32.622 "raid_level": "raid5f", 00:23:32.622 "superblock": false, 00:23:32.622 "num_base_bdevs": 3, 00:23:32.622 "num_base_bdevs_discovered": 3, 00:23:32.622 "num_base_bdevs_operational": 3, 00:23:32.622 
"base_bdevs_list": [ 00:23:32.622 { 00:23:32.622 "name": "BaseBdev1", 00:23:32.622 "uuid": "21752d0e-77cc-4282-9316-9dbb7e4d4249", 00:23:32.622 "is_configured": true, 00:23:32.622 "data_offset": 0, 00:23:32.622 "data_size": 65536 00:23:32.622 }, 00:23:32.622 { 00:23:32.622 "name": "BaseBdev2", 00:23:32.622 "uuid": "04001dcf-6623-476e-8267-424c926ebf20", 00:23:32.622 "is_configured": true, 00:23:32.622 "data_offset": 0, 00:23:32.622 "data_size": 65536 00:23:32.622 }, 00:23:32.622 { 00:23:32.622 "name": "BaseBdev3", 00:23:32.622 "uuid": "80bd8ed1-ea1d-4c6d-956d-680a3c1f1f9e", 00:23:32.622 "is_configured": true, 00:23:32.622 "data_offset": 0, 00:23:32.622 "data_size": 65536 00:23:32.622 } 00:23:32.622 ] 00:23:32.622 }' 00:23:32.622 10:37:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:32.622 10:37:26 -- common/autotest_common.sh@10 -- # set +x 00:23:33.187 10:37:26 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:33.445 [2024-07-12 10:37:27.195854] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:33.445 10:37:27 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:23:33.445 10:37:27 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:23:33.445 10:37:27 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:23:33.445 10:37:27 -- bdev/bdev_raid.sh@196 -- # return 0 00:23:33.445 10:37:27 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:23:33.445 10:37:27 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:23:33.445 10:37:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:33.445 10:37:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:33.445 10:37:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:33.445 10:37:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:33.445 10:37:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:33.445 10:37:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:33.445 10:37:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:33.445 10:37:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:33.445 10:37:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:33.445 10:37:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:33.445 10:37:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:33.703 10:37:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:33.703 "name": "Existed_Raid", 00:23:33.703 "uuid": "533c8c64-79d9-410d-b1bc-c95d86ecbaf9", 00:23:33.703 "strip_size_kb": 64, 00:23:33.703 "state": "online", 00:23:33.703 "raid_level": "raid5f", 00:23:33.703 "superblock": false, 00:23:33.703 "num_base_bdevs": 3, 00:23:33.703 "num_base_bdevs_discovered": 2, 00:23:33.703 "num_base_bdevs_operational": 2, 00:23:33.703 "base_bdevs_list": [ 00:23:33.703 { 00:23:33.703 "name": null, 00:23:33.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:33.703 "is_configured": false, 00:23:33.703 "data_offset": 0, 00:23:33.703 "data_size": 65536 00:23:33.703 }, 00:23:33.703 { 00:23:33.703 "name": "BaseBdev2", 00:23:33.703 "uuid": "04001dcf-6623-476e-8267-424c926ebf20", 00:23:33.703 "is_configured": true, 00:23:33.703 "data_offset": 0, 00:23:33.703 "data_size": 65536 00:23:33.703 }, 00:23:33.703 { 00:23:33.703 "name": "BaseBdev3", 00:23:33.703 "uuid": "80bd8ed1-ea1d-4c6d-956d-680a3c1f1f9e", 00:23:33.703 
"is_configured": true, 00:23:33.703 "data_offset": 0, 00:23:33.703 "data_size": 65536 00:23:33.703 } 00:23:33.703 ] 00:23:33.703 }' 00:23:33.703 10:37:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:33.703 10:37:27 -- common/autotest_common.sh@10 -- # set +x 00:23:34.638 10:37:28 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:23:34.638 10:37:28 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:34.638 10:37:28 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:34.638 10:37:28 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:23:34.638 10:37:28 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:23:34.638 10:37:28 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:34.638 10:37:28 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:23:34.895 [2024-07-12 10:37:28.728267] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:34.895 [2024-07-12 10:37:28.728406] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:34.895 [2024-07-12 10:37:28.728552] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:34.895 10:37:28 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:23:34.895 10:37:28 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:34.895 10:37:28 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:34.895 10:37:28 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:23:35.461 10:37:29 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:23:35.461 10:37:29 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:35.461 10:37:29 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:23:35.461 [2024-07-12 10:37:29.255721] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:35.461 [2024-07-12 10:37:29.256057] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:23:35.461 10:37:29 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:23:35.461 10:37:29 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:35.461 10:37:29 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:35.461 10:37:29 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:23:35.719 10:37:29 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:23:35.719 10:37:29 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:23:35.719 10:37:29 -- bdev/bdev_raid.sh@287 -- # killprocess 130611 00:23:35.719 10:37:29 -- common/autotest_common.sh@926 -- # '[' -z 130611 ']' 00:23:35.719 10:37:29 -- common/autotest_common.sh@930 -- # kill -0 130611 00:23:35.719 10:37:29 -- common/autotest_common.sh@931 -- # uname 00:23:35.719 10:37:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:35.719 10:37:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 130611 00:23:35.719 killing process with pid 130611 00:23:35.719 10:37:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:35.719 10:37:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:35.719 10:37:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 130611' 00:23:35.719 10:37:29 -- 
common/autotest_common.sh@945 -- # kill 130611 00:23:35.719 10:37:29 -- common/autotest_common.sh@950 -- # wait 130611 00:23:35.719 [2024-07-12 10:37:29.609005] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:35.719 [2024-07-12 10:37:29.609169] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:36.654 ************************************ 00:23:36.654 END TEST raid5f_state_function_test 00:23:36.654 ************************************ 00:23:36.654 10:37:30 -- bdev/bdev_raid.sh@289 -- # return 0 00:23:36.654 00:23:36.654 real 0m12.033s 00:23:36.654 user 0m21.462s 00:23:36.654 sys 0m1.421s 00:23:36.654 10:37:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:36.654 10:37:30 -- common/autotest_common.sh@10 -- # set +x 00:23:36.654 10:37:30 -- bdev/bdev_raid.sh@745 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:23:36.654 10:37:30 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:23:36.654 10:37:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:36.654 10:37:30 -- common/autotest_common.sh@10 -- # set +x 00:23:36.913 ************************************ 00:23:36.913 START TEST raid5f_state_function_test_sb 00:23:36.913 ************************************ 00:23:36.913 10:37:30 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid5f 3 true 00:23:36.913 10:37:30 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:23:36.913 10:37:30 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:23:36.913 10:37:30 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:23:36.913 10:37:30 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:23:36.913 10:37:30 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:23:36.913 10:37:30 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:23:36.913 10:37:30 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:36.913 10:37:30 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:23:36.913 10:37:30 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:23:36.913 10:37:30 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:36.913 10:37:30 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:23:36.913 10:37:30 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:23:36.913 10:37:30 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:36.913 10:37:30 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:23:36.913 10:37:30 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:23:36.913 10:37:30 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:36.913 10:37:30 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:23:36.913 10:37:30 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:23:36.913 10:37:30 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:23:36.913 10:37:30 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:23:36.913 10:37:30 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:23:36.913 10:37:30 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:23:36.913 10:37:30 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:23:36.913 10:37:30 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:23:36.913 10:37:30 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:23:36.913 10:37:30 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:23:36.913 10:37:30 -- bdev/bdev_raid.sh@226 -- # raid_pid=131007 00:23:36.913 10:37:30 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 131007' 00:23:36.913 10:37:30 -- bdev/bdev_raid.sh@225 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:23:36.913 Process raid pid: 131007 00:23:36.913 10:37:30 -- bdev/bdev_raid.sh@228 -- # waitforlisten 131007 /var/tmp/spdk-raid.sock 00:23:36.913 10:37:30 -- common/autotest_common.sh@819 -- # '[' -z 131007 ']' 00:23:36.913 10:37:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:36.913 10:37:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:36.913 10:37:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:36.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:36.913 10:37:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:36.913 10:37:30 -- common/autotest_common.sh@10 -- # set +x 00:23:36.913 [2024-07-12 10:37:30.644762] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:23:36.913 [2024-07-12 10:37:30.645161] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:36.913 [2024-07-12 10:37:30.801241] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:37.172 [2024-07-12 10:37:31.029045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:37.430 [2024-07-12 10:37:31.217912] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:37.688 10:37:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:37.688 10:37:31 -- common/autotest_common.sh@852 -- # return 0 00:23:37.688 10:37:31 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:23:37.946 [2024-07-12 10:37:31.690155] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:37.946 [2024-07-12 10:37:31.690371] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:37.946 [2024-07-12 10:37:31.690499] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:37.946 [2024-07-12 10:37:31.690556] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:37.946 [2024-07-12 10:37:31.690776] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:37.946 [2024-07-12 10:37:31.690868] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:37.946 10:37:31 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:23:37.946 10:37:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:37.946 10:37:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:37.946 10:37:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:37.946 10:37:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:37.946 10:37:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:37.946 10:37:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:37.946 10:37:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:37.946 10:37:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:37.946 10:37:31 -- bdev/bdev_raid.sh@125 -- # 
local tmp 00:23:37.946 10:37:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:37.946 10:37:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:38.204 10:37:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:38.204 "name": "Existed_Raid", 00:23:38.204 "uuid": "d4d70097-dac9-439a-9b63-f4588ba4dbe6", 00:23:38.204 "strip_size_kb": 64, 00:23:38.204 "state": "configuring", 00:23:38.204 "raid_level": "raid5f", 00:23:38.204 "superblock": true, 00:23:38.204 "num_base_bdevs": 3, 00:23:38.204 "num_base_bdevs_discovered": 0, 00:23:38.204 "num_base_bdevs_operational": 3, 00:23:38.204 "base_bdevs_list": [ 00:23:38.204 { 00:23:38.204 "name": "BaseBdev1", 00:23:38.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:38.204 "is_configured": false, 00:23:38.204 "data_offset": 0, 00:23:38.204 "data_size": 0 00:23:38.204 }, 00:23:38.204 { 00:23:38.204 "name": "BaseBdev2", 00:23:38.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:38.204 "is_configured": false, 00:23:38.204 "data_offset": 0, 00:23:38.204 "data_size": 0 00:23:38.204 }, 00:23:38.204 { 00:23:38.204 "name": "BaseBdev3", 00:23:38.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:38.204 "is_configured": false, 00:23:38.204 "data_offset": 0, 00:23:38.204 "data_size": 0 00:23:38.204 } 00:23:38.204 ] 00:23:38.204 }' 00:23:38.204 10:37:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:38.204 10:37:31 -- common/autotest_common.sh@10 -- # set +x 00:23:38.771 10:37:32 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:39.029 [2024-07-12 10:37:32.754160] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:39.029 [2024-07-12 10:37:32.754303] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:23:39.029 10:37:32 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:23:39.286 [2024-07-12 10:37:32.990263] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:39.286 [2024-07-12 10:37:32.990452] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:39.286 [2024-07-12 10:37:32.990596] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:39.286 [2024-07-12 10:37:32.990651] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:39.286 [2024-07-12 10:37:32.990745] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:39.286 [2024-07-12 10:37:32.990812] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:39.286 10:37:32 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:39.543 [2024-07-12 10:37:33.204273] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:39.543 BaseBdev1 00:23:39.543 10:37:33 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:23:39.543 10:37:33 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:23:39.543 10:37:33 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:23:39.543 10:37:33 -- 
common/autotest_common.sh@889 -- # local i 00:23:39.543 10:37:33 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:23:39.543 10:37:33 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:23:39.543 10:37:33 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:39.543 10:37:33 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:39.800 [ 00:23:39.800 { 00:23:39.800 "name": "BaseBdev1", 00:23:39.800 "aliases": [ 00:23:39.800 "4b7c7875-f705-4480-a025-7de8318c1e87" 00:23:39.800 ], 00:23:39.800 "product_name": "Malloc disk", 00:23:39.800 "block_size": 512, 00:23:39.800 "num_blocks": 65536, 00:23:39.800 "uuid": "4b7c7875-f705-4480-a025-7de8318c1e87", 00:23:39.800 "assigned_rate_limits": { 00:23:39.800 "rw_ios_per_sec": 0, 00:23:39.800 "rw_mbytes_per_sec": 0, 00:23:39.800 "r_mbytes_per_sec": 0, 00:23:39.800 "w_mbytes_per_sec": 0 00:23:39.800 }, 00:23:39.800 "claimed": true, 00:23:39.800 "claim_type": "exclusive_write", 00:23:39.800 "zoned": false, 00:23:39.800 "supported_io_types": { 00:23:39.800 "read": true, 00:23:39.800 "write": true, 00:23:39.800 "unmap": true, 00:23:39.800 "write_zeroes": true, 00:23:39.800 "flush": true, 00:23:39.800 "reset": true, 00:23:39.800 "compare": false, 00:23:39.800 "compare_and_write": false, 00:23:39.800 "abort": true, 00:23:39.800 "nvme_admin": false, 00:23:39.800 "nvme_io": false 00:23:39.800 }, 00:23:39.800 "memory_domains": [ 00:23:39.800 { 00:23:39.800 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:39.800 "dma_device_type": 2 00:23:39.800 } 00:23:39.800 ], 00:23:39.800 "driver_specific": {} 00:23:39.800 } 00:23:39.800 ] 00:23:39.800 10:37:33 -- common/autotest_common.sh@895 -- # return 0 00:23:39.800 10:37:33 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:23:39.800 10:37:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:39.800 10:37:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:39.800 10:37:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:39.800 10:37:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:39.800 10:37:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:39.800 10:37:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:39.800 10:37:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:39.800 10:37:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:39.800 10:37:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:39.800 10:37:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:39.800 10:37:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:40.057 10:37:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:40.057 "name": "Existed_Raid", 00:23:40.057 "uuid": "b7fad03f-e78e-4a89-a052-afb4864c1794", 00:23:40.057 "strip_size_kb": 64, 00:23:40.057 "state": "configuring", 00:23:40.057 "raid_level": "raid5f", 00:23:40.057 "superblock": true, 00:23:40.057 "num_base_bdevs": 3, 00:23:40.057 "num_base_bdevs_discovered": 1, 00:23:40.057 "num_base_bdevs_operational": 3, 00:23:40.057 "base_bdevs_list": [ 00:23:40.057 { 00:23:40.057 "name": "BaseBdev1", 00:23:40.057 "uuid": "4b7c7875-f705-4480-a025-7de8318c1e87", 00:23:40.057 "is_configured": true, 00:23:40.057 "data_offset": 2048, 00:23:40.057 
"data_size": 63488 00:23:40.057 }, 00:23:40.057 { 00:23:40.057 "name": "BaseBdev2", 00:23:40.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:40.057 "is_configured": false, 00:23:40.057 "data_offset": 0, 00:23:40.057 "data_size": 0 00:23:40.057 }, 00:23:40.057 { 00:23:40.057 "name": "BaseBdev3", 00:23:40.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:40.057 "is_configured": false, 00:23:40.057 "data_offset": 0, 00:23:40.057 "data_size": 0 00:23:40.057 } 00:23:40.057 ] 00:23:40.057 }' 00:23:40.057 10:37:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:40.057 10:37:33 -- common/autotest_common.sh@10 -- # set +x 00:23:40.989 10:37:34 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:40.989 [2024-07-12 10:37:34.776549] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:40.989 [2024-07-12 10:37:34.776698] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:23:40.989 10:37:34 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:23:40.989 10:37:34 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:41.247 10:37:35 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:41.505 BaseBdev1 00:23:41.505 10:37:35 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:23:41.505 10:37:35 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:23:41.505 10:37:35 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:23:41.505 10:37:35 -- common/autotest_common.sh@889 -- # local i 00:23:41.505 10:37:35 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:23:41.505 10:37:35 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:23:41.505 10:37:35 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:41.505 10:37:35 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:41.764 [ 00:23:41.764 { 00:23:41.764 "name": "BaseBdev1", 00:23:41.764 "aliases": [ 00:23:41.764 "398a4a4d-e6ef-4d7c-95ef-0c4f97517edf" 00:23:41.764 ], 00:23:41.764 "product_name": "Malloc disk", 00:23:41.764 "block_size": 512, 00:23:41.764 "num_blocks": 65536, 00:23:41.764 "uuid": "398a4a4d-e6ef-4d7c-95ef-0c4f97517edf", 00:23:41.764 "assigned_rate_limits": { 00:23:41.764 "rw_ios_per_sec": 0, 00:23:41.764 "rw_mbytes_per_sec": 0, 00:23:41.764 "r_mbytes_per_sec": 0, 00:23:41.764 "w_mbytes_per_sec": 0 00:23:41.764 }, 00:23:41.764 "claimed": false, 00:23:41.764 "zoned": false, 00:23:41.764 "supported_io_types": { 00:23:41.764 "read": true, 00:23:41.764 "write": true, 00:23:41.764 "unmap": true, 00:23:41.764 "write_zeroes": true, 00:23:41.764 "flush": true, 00:23:41.764 "reset": true, 00:23:41.764 "compare": false, 00:23:41.764 "compare_and_write": false, 00:23:41.764 "abort": true, 00:23:41.764 "nvme_admin": false, 00:23:41.764 "nvme_io": false 00:23:41.764 }, 00:23:41.764 "memory_domains": [ 00:23:41.764 { 00:23:41.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:41.764 "dma_device_type": 2 00:23:41.764 } 00:23:41.764 ], 00:23:41.764 "driver_specific": {} 00:23:41.764 } 00:23:41.764 ] 00:23:41.764 10:37:35 -- common/autotest_common.sh@895 -- # return 0 00:23:41.764 
10:37:35 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:23:42.022 [2024-07-12 10:37:35.747061] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:42.022 [2024-07-12 10:37:35.749076] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:42.022 [2024-07-12 10:37:35.749263] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:42.022 [2024-07-12 10:37:35.749409] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:42.022 [2024-07-12 10:37:35.749473] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:42.022 10:37:35 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:23:42.022 10:37:35 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:42.022 10:37:35 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:23:42.022 10:37:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:42.022 10:37:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:42.022 10:37:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:42.022 10:37:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:42.022 10:37:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:42.022 10:37:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:42.022 10:37:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:42.022 10:37:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:42.022 10:37:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:42.022 10:37:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:42.022 10:37:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:42.280 10:37:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:42.280 "name": "Existed_Raid", 00:23:42.280 "uuid": "ec955a8e-20f9-4e6b-a85f-319c050959ee", 00:23:42.280 "strip_size_kb": 64, 00:23:42.280 "state": "configuring", 00:23:42.280 "raid_level": "raid5f", 00:23:42.280 "superblock": true, 00:23:42.280 "num_base_bdevs": 3, 00:23:42.280 "num_base_bdevs_discovered": 1, 00:23:42.280 "num_base_bdevs_operational": 3, 00:23:42.280 "base_bdevs_list": [ 00:23:42.280 { 00:23:42.280 "name": "BaseBdev1", 00:23:42.280 "uuid": "398a4a4d-e6ef-4d7c-95ef-0c4f97517edf", 00:23:42.280 "is_configured": true, 00:23:42.280 "data_offset": 2048, 00:23:42.280 "data_size": 63488 00:23:42.280 }, 00:23:42.280 { 00:23:42.280 "name": "BaseBdev2", 00:23:42.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:42.280 "is_configured": false, 00:23:42.280 "data_offset": 0, 00:23:42.280 "data_size": 0 00:23:42.280 }, 00:23:42.280 { 00:23:42.280 "name": "BaseBdev3", 00:23:42.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:42.280 "is_configured": false, 00:23:42.280 "data_offset": 0, 00:23:42.280 "data_size": 0 00:23:42.280 } 00:23:42.280 ] 00:23:42.280 }' 00:23:42.280 10:37:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:42.280 10:37:35 -- common/autotest_common.sh@10 -- # set +x 00:23:42.847 10:37:36 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:42.847 [2024-07-12 10:37:36.756476] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:42.847 BaseBdev2 00:23:43.105 10:37:36 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:23:43.105 10:37:36 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:23:43.105 10:37:36 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:23:43.105 10:37:36 -- common/autotest_common.sh@889 -- # local i 00:23:43.105 10:37:36 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:23:43.105 10:37:36 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:23:43.105 10:37:36 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:43.105 10:37:37 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:43.363 [ 00:23:43.363 { 00:23:43.363 "name": "BaseBdev2", 00:23:43.363 "aliases": [ 00:23:43.363 "f0e1a6d4-f158-4d13-ba99-795ab0264166" 00:23:43.363 ], 00:23:43.363 "product_name": "Malloc disk", 00:23:43.363 "block_size": 512, 00:23:43.363 "num_blocks": 65536, 00:23:43.363 "uuid": "f0e1a6d4-f158-4d13-ba99-795ab0264166", 00:23:43.363 "assigned_rate_limits": { 00:23:43.363 "rw_ios_per_sec": 0, 00:23:43.363 "rw_mbytes_per_sec": 0, 00:23:43.363 "r_mbytes_per_sec": 0, 00:23:43.363 "w_mbytes_per_sec": 0 00:23:43.363 }, 00:23:43.363 "claimed": true, 00:23:43.363 "claim_type": "exclusive_write", 00:23:43.363 "zoned": false, 00:23:43.363 "supported_io_types": { 00:23:43.363 "read": true, 00:23:43.363 "write": true, 00:23:43.363 "unmap": true, 00:23:43.363 "write_zeroes": true, 00:23:43.363 "flush": true, 00:23:43.363 "reset": true, 00:23:43.363 "compare": false, 00:23:43.363 "compare_and_write": false, 00:23:43.363 "abort": true, 00:23:43.363 "nvme_admin": false, 00:23:43.363 "nvme_io": false 00:23:43.363 }, 00:23:43.363 "memory_domains": [ 00:23:43.363 { 00:23:43.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:43.363 "dma_device_type": 2 00:23:43.363 } 00:23:43.363 ], 00:23:43.363 "driver_specific": {} 00:23:43.363 } 00:23:43.363 ] 00:23:43.363 10:37:37 -- common/autotest_common.sh@895 -- # return 0 00:23:43.363 10:37:37 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:23:43.363 10:37:37 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:43.364 10:37:37 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:23:43.364 10:37:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:43.364 10:37:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:43.364 10:37:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:43.364 10:37:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:43.364 10:37:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:43.364 10:37:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:43.364 10:37:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:43.364 10:37:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:43.364 10:37:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:43.364 10:37:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:43.364 10:37:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:43.622 10:37:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:43.622 "name": "Existed_Raid", 00:23:43.622 "uuid": 
"ec955a8e-20f9-4e6b-a85f-319c050959ee", 00:23:43.622 "strip_size_kb": 64, 00:23:43.622 "state": "configuring", 00:23:43.622 "raid_level": "raid5f", 00:23:43.622 "superblock": true, 00:23:43.622 "num_base_bdevs": 3, 00:23:43.622 "num_base_bdevs_discovered": 2, 00:23:43.622 "num_base_bdevs_operational": 3, 00:23:43.622 "base_bdevs_list": [ 00:23:43.622 { 00:23:43.622 "name": "BaseBdev1", 00:23:43.622 "uuid": "398a4a4d-e6ef-4d7c-95ef-0c4f97517edf", 00:23:43.622 "is_configured": true, 00:23:43.622 "data_offset": 2048, 00:23:43.622 "data_size": 63488 00:23:43.622 }, 00:23:43.622 { 00:23:43.622 "name": "BaseBdev2", 00:23:43.622 "uuid": "f0e1a6d4-f158-4d13-ba99-795ab0264166", 00:23:43.622 "is_configured": true, 00:23:43.622 "data_offset": 2048, 00:23:43.622 "data_size": 63488 00:23:43.622 }, 00:23:43.622 { 00:23:43.622 "name": "BaseBdev3", 00:23:43.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:43.622 "is_configured": false, 00:23:43.622 "data_offset": 0, 00:23:43.622 "data_size": 0 00:23:43.622 } 00:23:43.622 ] 00:23:43.622 }' 00:23:43.622 10:37:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:43.622 10:37:37 -- common/autotest_common.sh@10 -- # set +x 00:23:44.215 10:37:38 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:44.489 [2024-07-12 10:37:38.359776] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:44.489 [2024-07-12 10:37:38.360267] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:23:44.489 [2024-07-12 10:37:38.360386] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:23:44.489 BaseBdev3 00:23:44.489 [2024-07-12 10:37:38.360536] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:23:44.489 [2024-07-12 10:37:38.365083] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:23:44.489 [2024-07-12 10:37:38.365221] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:23:44.489 [2024-07-12 10:37:38.365511] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:44.489 10:37:38 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:23:44.489 10:37:38 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:23:44.489 10:37:38 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:23:44.489 10:37:38 -- common/autotest_common.sh@889 -- # local i 00:23:44.489 10:37:38 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:23:44.489 10:37:38 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:23:44.489 10:37:38 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:44.771 10:37:38 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:45.030 [ 00:23:45.030 { 00:23:45.030 "name": "BaseBdev3", 00:23:45.030 "aliases": [ 00:23:45.030 "fc5f2237-1c0e-4b8b-8080-d37b1e093787" 00:23:45.030 ], 00:23:45.030 "product_name": "Malloc disk", 00:23:45.030 "block_size": 512, 00:23:45.030 "num_blocks": 65536, 00:23:45.030 "uuid": "fc5f2237-1c0e-4b8b-8080-d37b1e093787", 00:23:45.030 "assigned_rate_limits": { 00:23:45.030 "rw_ios_per_sec": 0, 00:23:45.030 "rw_mbytes_per_sec": 0, 00:23:45.030 "r_mbytes_per_sec": 0, 00:23:45.030 
"w_mbytes_per_sec": 0 00:23:45.030 }, 00:23:45.030 "claimed": true, 00:23:45.030 "claim_type": "exclusive_write", 00:23:45.030 "zoned": false, 00:23:45.030 "supported_io_types": { 00:23:45.030 "read": true, 00:23:45.030 "write": true, 00:23:45.030 "unmap": true, 00:23:45.030 "write_zeroes": true, 00:23:45.030 "flush": true, 00:23:45.030 "reset": true, 00:23:45.030 "compare": false, 00:23:45.030 "compare_and_write": false, 00:23:45.030 "abort": true, 00:23:45.030 "nvme_admin": false, 00:23:45.030 "nvme_io": false 00:23:45.030 }, 00:23:45.030 "memory_domains": [ 00:23:45.030 { 00:23:45.030 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:45.030 "dma_device_type": 2 00:23:45.030 } 00:23:45.030 ], 00:23:45.030 "driver_specific": {} 00:23:45.030 } 00:23:45.030 ] 00:23:45.030 10:37:38 -- common/autotest_common.sh@895 -- # return 0 00:23:45.030 10:37:38 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:23:45.030 10:37:38 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:45.030 10:37:38 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:23:45.030 10:37:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:45.030 10:37:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:45.030 10:37:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:45.030 10:37:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:45.030 10:37:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:45.030 10:37:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:45.030 10:37:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:45.030 10:37:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:45.030 10:37:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:45.030 10:37:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:45.030 10:37:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:45.030 10:37:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:45.030 "name": "Existed_Raid", 00:23:45.030 "uuid": "ec955a8e-20f9-4e6b-a85f-319c050959ee", 00:23:45.030 "strip_size_kb": 64, 00:23:45.030 "state": "online", 00:23:45.030 "raid_level": "raid5f", 00:23:45.030 "superblock": true, 00:23:45.030 "num_base_bdevs": 3, 00:23:45.030 "num_base_bdevs_discovered": 3, 00:23:45.030 "num_base_bdevs_operational": 3, 00:23:45.030 "base_bdevs_list": [ 00:23:45.030 { 00:23:45.030 "name": "BaseBdev1", 00:23:45.031 "uuid": "398a4a4d-e6ef-4d7c-95ef-0c4f97517edf", 00:23:45.031 "is_configured": true, 00:23:45.031 "data_offset": 2048, 00:23:45.031 "data_size": 63488 00:23:45.031 }, 00:23:45.031 { 00:23:45.031 "name": "BaseBdev2", 00:23:45.031 "uuid": "f0e1a6d4-f158-4d13-ba99-795ab0264166", 00:23:45.031 "is_configured": true, 00:23:45.031 "data_offset": 2048, 00:23:45.031 "data_size": 63488 00:23:45.031 }, 00:23:45.031 { 00:23:45.031 "name": "BaseBdev3", 00:23:45.031 "uuid": "fc5f2237-1c0e-4b8b-8080-d37b1e093787", 00:23:45.031 "is_configured": true, 00:23:45.031 "data_offset": 2048, 00:23:45.031 "data_size": 63488 00:23:45.031 } 00:23:45.031 ] 00:23:45.031 }' 00:23:45.031 10:37:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:45.031 10:37:38 -- common/autotest_common.sh@10 -- # set +x 00:23:45.966 10:37:39 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:45.966 [2024-07-12 10:37:39.807031] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:46.224 10:37:39 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:23:46.224 10:37:39 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:23:46.224 10:37:39 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:23:46.224 10:37:39 -- bdev/bdev_raid.sh@196 -- # return 0 00:23:46.224 10:37:39 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:23:46.224 10:37:39 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:23:46.224 10:37:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:46.224 10:37:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:46.224 10:37:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:46.224 10:37:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:46.224 10:37:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:46.224 10:37:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:46.224 10:37:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:46.224 10:37:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:46.224 10:37:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:46.224 10:37:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:46.224 10:37:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:46.224 10:37:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:46.224 "name": "Existed_Raid", 00:23:46.224 "uuid": "ec955a8e-20f9-4e6b-a85f-319c050959ee", 00:23:46.224 "strip_size_kb": 64, 00:23:46.224 "state": "online", 00:23:46.224 "raid_level": "raid5f", 00:23:46.224 "superblock": true, 00:23:46.224 "num_base_bdevs": 3, 00:23:46.224 "num_base_bdevs_discovered": 2, 00:23:46.224 "num_base_bdevs_operational": 2, 00:23:46.224 "base_bdevs_list": [ 00:23:46.224 { 00:23:46.224 "name": null, 00:23:46.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:46.224 "is_configured": false, 00:23:46.224 "data_offset": 2048, 00:23:46.224 "data_size": 63488 00:23:46.224 }, 00:23:46.224 { 00:23:46.224 "name": "BaseBdev2", 00:23:46.224 "uuid": "f0e1a6d4-f158-4d13-ba99-795ab0264166", 00:23:46.224 "is_configured": true, 00:23:46.224 "data_offset": 2048, 00:23:46.224 "data_size": 63488 00:23:46.224 }, 00:23:46.224 { 00:23:46.224 "name": "BaseBdev3", 00:23:46.224 "uuid": "fc5f2237-1c0e-4b8b-8080-d37b1e093787", 00:23:46.224 "is_configured": true, 00:23:46.224 "data_offset": 2048, 00:23:46.224 "data_size": 63488 00:23:46.224 } 00:23:46.224 ] 00:23:46.224 }' 00:23:46.224 10:37:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:46.224 10:37:40 -- common/autotest_common.sh@10 -- # set +x 00:23:47.157 10:37:40 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:23:47.157 10:37:40 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:47.157 10:37:40 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:47.157 10:37:40 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:23:47.157 10:37:40 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:23:47.157 10:37:40 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:47.157 10:37:40 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:23:47.415 [2024-07-12 10:37:41.106079] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:23:47.415 [2024-07-12 10:37:41.106219] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:47.415 [2024-07-12 10:37:41.106359] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:47.415 10:37:41 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:23:47.415 10:37:41 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:47.415 10:37:41 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:47.415 10:37:41 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:23:47.673 10:37:41 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:23:47.673 10:37:41 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:47.673 10:37:41 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:23:47.931 [2024-07-12 10:37:41.653128] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:47.931 [2024-07-12 10:37:41.653349] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:23:47.931 10:37:41 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:23:47.931 10:37:41 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:47.931 10:37:41 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:47.931 10:37:41 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:23:48.191 10:37:41 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:23:48.191 10:37:41 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:23:48.191 10:37:41 -- bdev/bdev_raid.sh@287 -- # killprocess 131007 00:23:48.191 10:37:41 -- common/autotest_common.sh@926 -- # '[' -z 131007 ']' 00:23:48.191 10:37:41 -- common/autotest_common.sh@930 -- # kill -0 131007 00:23:48.191 10:37:41 -- common/autotest_common.sh@931 -- # uname 00:23:48.191 10:37:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:48.191 10:37:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 131007 00:23:48.191 killing process with pid 131007 00:23:48.191 10:37:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:48.191 10:37:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:48.191 10:37:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 131007' 00:23:48.191 10:37:41 -- common/autotest_common.sh@945 -- # kill 131007 00:23:48.191 10:37:41 -- common/autotest_common.sh@950 -- # wait 131007 00:23:48.191 [2024-07-12 10:37:41.939320] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:48.191 [2024-07-12 10:37:41.939520] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:49.125 ************************************ 00:23:49.125 END TEST raid5f_state_function_test_sb 00:23:49.125 ************************************ 00:23:49.125 10:37:42 -- bdev/bdev_raid.sh@289 -- # return 0 00:23:49.125 00:23:49.125 real 0m12.268s 00:23:49.125 user 0m21.827s 00:23:49.125 sys 0m1.385s 00:23:49.125 10:37:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:49.125 10:37:42 -- common/autotest_common.sh@10 -- # set +x 00:23:49.125 10:37:42 -- bdev/bdev_raid.sh@746 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:23:49.125 10:37:42 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:23:49.125 10:37:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:49.125 10:37:42 
-- common/autotest_common.sh@10 -- # set +x 00:23:49.125 ************************************ 00:23:49.125 START TEST raid5f_superblock_test 00:23:49.125 ************************************ 00:23:49.125 10:37:42 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid5f 3 00:23:49.125 10:37:42 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid5f 00:23:49.125 10:37:42 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:23:49.125 10:37:42 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:23:49.125 10:37:42 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:23:49.125 10:37:42 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:23:49.125 10:37:42 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:23:49.125 10:37:42 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:23:49.125 10:37:42 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:23:49.125 10:37:42 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:23:49.125 10:37:42 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:23:49.125 10:37:42 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:23:49.125 10:37:42 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:23:49.125 10:37:42 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:23:49.125 10:37:42 -- bdev/bdev_raid.sh@349 -- # '[' raid5f '!=' raid1 ']' 00:23:49.125 10:37:42 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:23:49.125 10:37:42 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:23:49.125 10:37:42 -- bdev/bdev_raid.sh@357 -- # raid_pid=131415 00:23:49.125 10:37:42 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:23:49.125 10:37:42 -- bdev/bdev_raid.sh@358 -- # waitforlisten 131415 /var/tmp/spdk-raid.sock 00:23:49.125 10:37:42 -- common/autotest_common.sh@819 -- # '[' -z 131415 ']' 00:23:49.125 10:37:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:49.125 10:37:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:49.125 10:37:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:49.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:49.125 10:37:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:49.125 10:37:42 -- common/autotest_common.sh@10 -- # set +x 00:23:49.125 [2024-07-12 10:37:42.974134] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:23:49.125 [2024-07-12 10:37:42.974458] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131415 ] 00:23:49.383 [2024-07-12 10:37:43.143857] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:49.643 [2024-07-12 10:37:43.369994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:49.643 [2024-07-12 10:37:43.554266] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:50.210 10:37:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:50.210 10:37:43 -- common/autotest_common.sh@852 -- # return 0 00:23:50.210 10:37:43 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:23:50.210 10:37:43 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:23:50.210 10:37:43 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:23:50.210 10:37:43 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:23:50.210 10:37:43 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:23:50.210 10:37:43 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:50.210 10:37:43 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:23:50.210 10:37:43 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:50.210 10:37:43 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:23:50.210 malloc1 00:23:50.210 10:37:44 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:50.468 [2024-07-12 10:37:44.352711] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:50.468 [2024-07-12 10:37:44.352944] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:50.468 [2024-07-12 10:37:44.353084] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:23:50.468 [2024-07-12 10:37:44.353215] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:50.468 [2024-07-12 10:37:44.355681] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:50.468 [2024-07-12 10:37:44.355853] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:50.468 pt1 00:23:50.468 10:37:44 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:23:50.468 10:37:44 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:23:50.468 10:37:44 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:23:50.468 10:37:44 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:23:50.468 10:37:44 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:23:50.468 10:37:44 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:50.468 10:37:44 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:23:50.468 10:37:44 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:50.468 10:37:44 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:23:50.727 malloc2 00:23:50.985 10:37:44 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:23:50.985 [2024-07-12 10:37:44.878662] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:50.985 [2024-07-12 10:37:44.878860] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:50.985 [2024-07-12 10:37:44.878936] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:23:50.985 [2024-07-12 10:37:44.879082] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:50.985 [2024-07-12 10:37:44.881407] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:50.985 [2024-07-12 10:37:44.881567] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:50.985 pt2 00:23:50.986 10:37:44 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:23:50.986 10:37:44 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:23:50.986 10:37:44 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:23:50.986 10:37:44 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:23:50.986 10:37:44 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:23:50.986 10:37:44 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:50.986 10:37:44 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:23:50.986 10:37:44 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:50.986 10:37:44 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:23:51.243 malloc3 00:23:51.243 10:37:45 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:51.500 [2024-07-12 10:37:45.275121] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:51.500 [2024-07-12 10:37:45.275224] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:51.500 [2024-07-12 10:37:45.275304] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:23:51.500 [2024-07-12 10:37:45.275578] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:51.500 [2024-07-12 10:37:45.277941] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:51.500 [2024-07-12 10:37:45.278092] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:51.500 pt3 00:23:51.500 10:37:45 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:23:51.500 10:37:45 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:23:51.500 10:37:45 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:23:51.758 [2024-07-12 10:37:45.455184] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:51.758 [2024-07-12 10:37:45.457102] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:51.758 [2024-07-12 10:37:45.457251] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:51.758 [2024-07-12 10:37:45.457565] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:23:51.758 [2024-07-12 10:37:45.457741] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:23:51.758 [2024-07-12 10:37:45.458008] bdev_raid.c: 
232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:23:51.758 [2024-07-12 10:37:45.462307] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:23:51.758 [2024-07-12 10:37:45.462406] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:23:51.758 [2024-07-12 10:37:45.462645] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:51.758 10:37:45 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:51.758 10:37:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:51.758 10:37:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:51.758 10:37:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:51.758 10:37:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:51.758 10:37:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:51.758 10:37:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:51.758 10:37:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:51.758 10:37:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:51.758 10:37:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:51.758 10:37:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:51.758 10:37:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:51.758 10:37:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:51.758 "name": "raid_bdev1", 00:23:51.758 "uuid": "2c475ac7-7c75-4cbf-b09f-db7d4e97d5fc", 00:23:51.758 "strip_size_kb": 64, 00:23:51.758 "state": "online", 00:23:51.758 "raid_level": "raid5f", 00:23:51.758 "superblock": true, 00:23:51.758 "num_base_bdevs": 3, 00:23:51.758 "num_base_bdevs_discovered": 3, 00:23:51.758 "num_base_bdevs_operational": 3, 00:23:51.758 "base_bdevs_list": [ 00:23:51.758 { 00:23:51.758 "name": "pt1", 00:23:51.758 "uuid": "2b31b101-22f0-5f70-bcf0-4c8462094068", 00:23:51.758 "is_configured": true, 00:23:51.758 "data_offset": 2048, 00:23:51.758 "data_size": 63488 00:23:51.758 }, 00:23:51.758 { 00:23:51.758 "name": "pt2", 00:23:51.758 "uuid": "2383167a-1a7b-5ea3-ab33-64ebf1ee667c", 00:23:51.758 "is_configured": true, 00:23:51.758 "data_offset": 2048, 00:23:51.758 "data_size": 63488 00:23:51.758 }, 00:23:51.758 { 00:23:51.758 "name": "pt3", 00:23:51.758 "uuid": "50089420-6aaf-5147-921c-d26538634f62", 00:23:51.758 "is_configured": true, 00:23:51.758 "data_offset": 2048, 00:23:51.758 "data_size": 63488 00:23:51.758 } 00:23:51.758 ] 00:23:51.758 }' 00:23:51.758 10:37:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:51.758 10:37:45 -- common/autotest_common.sh@10 -- # set +x 00:23:52.690 10:37:46 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:52.690 10:37:46 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:23:52.690 [2024-07-12 10:37:46.476066] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:52.690 10:37:46 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=2c475ac7-7c75-4cbf-b09f-db7d4e97d5fc 00:23:52.690 10:37:46 -- bdev/bdev_raid.sh@380 -- # '[' -z 2c475ac7-7c75-4cbf-b09f-db7d4e97d5fc ']' 00:23:52.690 10:37:46 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:52.948 [2024-07-12 10:37:46.663977] 
bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:52.948 [2024-07-12 10:37:46.664102] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:52.948 [2024-07-12 10:37:46.664285] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:52.949 [2024-07-12 10:37:46.664474] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:52.949 [2024-07-12 10:37:46.664586] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:23:52.949 10:37:46 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:52.949 10:37:46 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:23:53.207 10:37:46 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:23:53.207 10:37:46 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:23:53.207 10:37:46 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:23:53.207 10:37:46 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:23:53.465 10:37:47 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:23:53.465 10:37:47 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:53.722 10:37:47 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:23:53.722 10:37:47 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:23:53.980 10:37:47 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:23:53.980 10:37:47 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:23:53.980 10:37:47 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:23:53.980 10:37:47 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:23:53.980 10:37:47 -- common/autotest_common.sh@640 -- # local es=0 00:23:53.980 10:37:47 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:23:53.980 10:37:47 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:53.980 10:37:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:53.980 10:37:47 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:53.980 10:37:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:53.980 10:37:47 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:53.980 10:37:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:53.980 10:37:47 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:53.980 10:37:47 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:23:53.980 10:37:47 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:23:54.238 [2024-07-12 10:37:48.056156] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:23:54.238 [2024-07-12 10:37:48.058158] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:23:54.238 [2024-07-12 10:37:48.058316] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:23:54.238 [2024-07-12 10:37:48.058400] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:23:54.238 [2024-07-12 10:37:48.058559] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:23:54.238 [2024-07-12 10:37:48.058626] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:23:54.238 [2024-07-12 10:37:48.058808] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:54.238 [2024-07-12 10:37:48.058912] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state configuring 00:23:54.238 request: 00:23:54.238 { 00:23:54.238 "name": "raid_bdev1", 00:23:54.238 "raid_level": "raid5f", 00:23:54.238 "base_bdevs": [ 00:23:54.238 "malloc1", 00:23:54.238 "malloc2", 00:23:54.238 "malloc3" 00:23:54.238 ], 00:23:54.238 "superblock": false, 00:23:54.238 "strip_size_kb": 64, 00:23:54.238 "method": "bdev_raid_create", 00:23:54.238 "req_id": 1 00:23:54.238 } 00:23:54.238 Got JSON-RPC error response 00:23:54.238 response: 00:23:54.238 { 00:23:54.238 "code": -17, 00:23:54.238 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:23:54.238 } 00:23:54.238 10:37:48 -- common/autotest_common.sh@643 -- # es=1 00:23:54.238 10:37:48 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:23:54.238 10:37:48 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:23:54.238 10:37:48 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:23:54.238 10:37:48 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:54.238 10:37:48 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:23:54.496 10:37:48 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:23:54.496 10:37:48 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:23:54.496 10:37:48 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:54.754 [2024-07-12 10:37:48.416154] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:54.754 [2024-07-12 10:37:48.416324] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:54.754 [2024-07-12 10:37:48.416388] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:23:54.754 [2024-07-12 10:37:48.416523] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:54.754 [2024-07-12 10:37:48.418777] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:54.754 [2024-07-12 10:37:48.418935] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:54.754 [2024-07-12 10:37:48.419115] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:23:54.754 [2024-07-12 10:37:48.419254] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:54.754 pt1 00:23:54.754 10:37:48 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 
configuring raid5f 64 3 00:23:54.754 10:37:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:54.754 10:37:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:54.754 10:37:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:54.754 10:37:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:54.754 10:37:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:54.754 10:37:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:54.754 10:37:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:54.754 10:37:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:54.754 10:37:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:54.754 10:37:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:54.754 10:37:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:54.754 10:37:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:54.754 "name": "raid_bdev1", 00:23:54.754 "uuid": "2c475ac7-7c75-4cbf-b09f-db7d4e97d5fc", 00:23:54.754 "strip_size_kb": 64, 00:23:54.754 "state": "configuring", 00:23:54.754 "raid_level": "raid5f", 00:23:54.754 "superblock": true, 00:23:54.754 "num_base_bdevs": 3, 00:23:54.754 "num_base_bdevs_discovered": 1, 00:23:54.754 "num_base_bdevs_operational": 3, 00:23:54.754 "base_bdevs_list": [ 00:23:54.754 { 00:23:54.754 "name": "pt1", 00:23:54.754 "uuid": "2b31b101-22f0-5f70-bcf0-4c8462094068", 00:23:54.754 "is_configured": true, 00:23:54.754 "data_offset": 2048, 00:23:54.754 "data_size": 63488 00:23:54.754 }, 00:23:54.754 { 00:23:54.754 "name": null, 00:23:54.754 "uuid": "2383167a-1a7b-5ea3-ab33-64ebf1ee667c", 00:23:54.754 "is_configured": false, 00:23:54.754 "data_offset": 2048, 00:23:54.754 "data_size": 63488 00:23:54.754 }, 00:23:54.754 { 00:23:54.754 "name": null, 00:23:54.754 "uuid": "50089420-6aaf-5147-921c-d26538634f62", 00:23:54.754 "is_configured": false, 00:23:54.754 "data_offset": 2048, 00:23:54.754 "data_size": 63488 00:23:54.754 } 00:23:54.754 ] 00:23:54.754 }' 00:23:54.754 10:37:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:54.754 10:37:48 -- common/autotest_common.sh@10 -- # set +x 00:23:55.687 10:37:49 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:23:55.687 10:37:49 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:55.687 [2024-07-12 10:37:49.464355] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:55.687 [2024-07-12 10:37:49.464547] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:55.687 [2024-07-12 10:37:49.464620] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:23:55.687 [2024-07-12 10:37:49.464755] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:55.687 [2024-07-12 10:37:49.465150] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:55.687 [2024-07-12 10:37:49.465323] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:55.688 [2024-07-12 10:37:49.465510] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:23:55.688 [2024-07-12 10:37:49.465627] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:55.688 pt2 00:23:55.688 10:37:49 -- 
bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:55.945 [2024-07-12 10:37:49.648406] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:23:55.945 10:37:49 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:23:55.945 10:37:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:55.945 10:37:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:55.945 10:37:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:55.945 10:37:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:55.945 10:37:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:55.945 10:37:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:55.945 10:37:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:55.945 10:37:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:55.945 10:37:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:55.945 10:37:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:55.945 10:37:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:56.203 10:37:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:56.203 "name": "raid_bdev1", 00:23:56.203 "uuid": "2c475ac7-7c75-4cbf-b09f-db7d4e97d5fc", 00:23:56.203 "strip_size_kb": 64, 00:23:56.203 "state": "configuring", 00:23:56.203 "raid_level": "raid5f", 00:23:56.203 "superblock": true, 00:23:56.203 "num_base_bdevs": 3, 00:23:56.203 "num_base_bdevs_discovered": 1, 00:23:56.203 "num_base_bdevs_operational": 3, 00:23:56.203 "base_bdevs_list": [ 00:23:56.203 { 00:23:56.203 "name": "pt1", 00:23:56.203 "uuid": "2b31b101-22f0-5f70-bcf0-4c8462094068", 00:23:56.203 "is_configured": true, 00:23:56.203 "data_offset": 2048, 00:23:56.203 "data_size": 63488 00:23:56.203 }, 00:23:56.203 { 00:23:56.203 "name": null, 00:23:56.203 "uuid": "2383167a-1a7b-5ea3-ab33-64ebf1ee667c", 00:23:56.203 "is_configured": false, 00:23:56.203 "data_offset": 2048, 00:23:56.203 "data_size": 63488 00:23:56.203 }, 00:23:56.203 { 00:23:56.203 "name": null, 00:23:56.203 "uuid": "50089420-6aaf-5147-921c-d26538634f62", 00:23:56.203 "is_configured": false, 00:23:56.203 "data_offset": 2048, 00:23:56.203 "data_size": 63488 00:23:56.203 } 00:23:56.203 ] 00:23:56.203 }' 00:23:56.203 10:37:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:56.203 10:37:49 -- common/autotest_common.sh@10 -- # set +x 00:23:56.769 10:37:50 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:23:56.769 10:37:50 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:23:56.769 10:37:50 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:57.027 [2024-07-12 10:37:50.776563] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:57.028 [2024-07-12 10:37:50.776736] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:57.028 [2024-07-12 10:37:50.776795] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:23:57.028 [2024-07-12 10:37:50.776900] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:57.028 [2024-07-12 10:37:50.777410] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:57.028 [2024-07-12 10:37:50.777542] 
vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:57.028 [2024-07-12 10:37:50.777714] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:23:57.028 [2024-07-12 10:37:50.777825] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:57.028 pt2 00:23:57.028 10:37:50 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:23:57.028 10:37:50 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:23:57.028 10:37:50 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:57.286 [2024-07-12 10:37:50.948612] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:57.286 [2024-07-12 10:37:50.948780] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:57.286 [2024-07-12 10:37:50.948840] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:23:57.286 [2024-07-12 10:37:50.948941] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:57.286 [2024-07-12 10:37:50.949393] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:57.286 [2024-07-12 10:37:50.949544] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:57.286 [2024-07-12 10:37:50.949770] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:23:57.286 [2024-07-12 10:37:50.949906] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:57.286 [2024-07-12 10:37:50.950049] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:23:57.286 [2024-07-12 10:37:50.950132] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:23:57.286 [2024-07-12 10:37:50.950273] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:23:57.286 [2024-07-12 10:37:50.954223] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:23:57.286 [2024-07-12 10:37:50.954328] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:23:57.286 [2024-07-12 10:37:50.954584] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:57.286 pt3 00:23:57.286 10:37:50 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:23:57.286 10:37:50 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:23:57.286 10:37:50 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:57.286 10:37:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:57.286 10:37:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:57.286 10:37:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:57.286 10:37:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:57.286 10:37:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:57.286 10:37:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:57.286 10:37:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:57.286 10:37:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:57.286 10:37:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:57.286 10:37:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:57.286 
10:37:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:57.286 10:37:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:57.286 "name": "raid_bdev1", 00:23:57.286 "uuid": "2c475ac7-7c75-4cbf-b09f-db7d4e97d5fc", 00:23:57.286 "strip_size_kb": 64, 00:23:57.286 "state": "online", 00:23:57.286 "raid_level": "raid5f", 00:23:57.286 "superblock": true, 00:23:57.286 "num_base_bdevs": 3, 00:23:57.286 "num_base_bdevs_discovered": 3, 00:23:57.286 "num_base_bdevs_operational": 3, 00:23:57.286 "base_bdevs_list": [ 00:23:57.286 { 00:23:57.286 "name": "pt1", 00:23:57.286 "uuid": "2b31b101-22f0-5f70-bcf0-4c8462094068", 00:23:57.286 "is_configured": true, 00:23:57.286 "data_offset": 2048, 00:23:57.286 "data_size": 63488 00:23:57.286 }, 00:23:57.286 { 00:23:57.286 "name": "pt2", 00:23:57.286 "uuid": "2383167a-1a7b-5ea3-ab33-64ebf1ee667c", 00:23:57.286 "is_configured": true, 00:23:57.286 "data_offset": 2048, 00:23:57.286 "data_size": 63488 00:23:57.286 }, 00:23:57.286 { 00:23:57.286 "name": "pt3", 00:23:57.286 "uuid": "50089420-6aaf-5147-921c-d26538634f62", 00:23:57.286 "is_configured": true, 00:23:57.286 "data_offset": 2048, 00:23:57.286 "data_size": 63488 00:23:57.286 } 00:23:57.286 ] 00:23:57.286 }' 00:23:57.286 10:37:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:57.286 10:37:51 -- common/autotest_common.sh@10 -- # set +x 00:23:58.220 10:37:51 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:58.220 10:37:51 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:23:58.220 [2024-07-12 10:37:51.995297] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:58.220 10:37:52 -- bdev/bdev_raid.sh@430 -- # '[' 2c475ac7-7c75-4cbf-b09f-db7d4e97d5fc '!=' 2c475ac7-7c75-4cbf-b09f-db7d4e97d5fc ']' 00:23:58.220 10:37:52 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid5f 00:23:58.220 10:37:52 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:23:58.220 10:37:52 -- bdev/bdev_raid.sh@196 -- # return 0 00:23:58.220 10:37:52 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:23:58.478 [2024-07-12 10:37:52.183219] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:23:58.478 10:37:52 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:58.478 10:37:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:58.478 10:37:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:58.478 10:37:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:58.478 10:37:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:58.478 10:37:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:58.478 10:37:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:58.478 10:37:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:58.478 10:37:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:58.478 10:37:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:58.478 10:37:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:58.478 10:37:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:58.478 10:37:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:58.478 "name": "raid_bdev1", 00:23:58.478 "uuid": "2c475ac7-7c75-4cbf-b09f-db7d4e97d5fc", 00:23:58.478 "strip_size_kb": 64, 
00:23:58.478 "state": "online", 00:23:58.478 "raid_level": "raid5f", 00:23:58.478 "superblock": true, 00:23:58.478 "num_base_bdevs": 3, 00:23:58.478 "num_base_bdevs_discovered": 2, 00:23:58.478 "num_base_bdevs_operational": 2, 00:23:58.478 "base_bdevs_list": [ 00:23:58.478 { 00:23:58.478 "name": null, 00:23:58.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:58.478 "is_configured": false, 00:23:58.478 "data_offset": 2048, 00:23:58.478 "data_size": 63488 00:23:58.478 }, 00:23:58.478 { 00:23:58.478 "name": "pt2", 00:23:58.478 "uuid": "2383167a-1a7b-5ea3-ab33-64ebf1ee667c", 00:23:58.478 "is_configured": true, 00:23:58.478 "data_offset": 2048, 00:23:58.478 "data_size": 63488 00:23:58.478 }, 00:23:58.478 { 00:23:58.478 "name": "pt3", 00:23:58.478 "uuid": "50089420-6aaf-5147-921c-d26538634f62", 00:23:58.478 "is_configured": true, 00:23:58.478 "data_offset": 2048, 00:23:58.478 "data_size": 63488 00:23:58.478 } 00:23:58.478 ] 00:23:58.478 }' 00:23:58.478 10:37:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:58.478 10:37:52 -- common/autotest_common.sh@10 -- # set +x 00:23:59.411 10:37:53 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:59.411 [2024-07-12 10:37:53.299532] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:59.411 [2024-07-12 10:37:53.299692] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:59.411 [2024-07-12 10:37:53.299853] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:59.411 [2024-07-12 10:37:53.300018] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:59.411 [2024-07-12 10:37:53.300111] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:23:59.411 10:37:53 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:23:59.411 10:37:53 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:59.667 10:37:53 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:23:59.667 10:37:53 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:23:59.667 10:37:53 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:23:59.667 10:37:53 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:23:59.667 10:37:53 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:59.924 10:37:53 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:23:59.924 10:37:53 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:23:59.924 10:37:53 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:24:00.182 10:37:53 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:24:00.182 10:37:53 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:24:00.182 10:37:53 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:24:00.182 10:37:53 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:24:00.182 10:37:53 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:00.439 [2024-07-12 10:37:54.135631] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:00.439 [2024-07-12 10:37:54.135856] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:24:00.439 [2024-07-12 10:37:54.135928] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:24:00.439 [2024-07-12 10:37:54.136080] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:00.439 [2024-07-12 10:37:54.138299] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:00.439 [2024-07-12 10:37:54.138463] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:00.439 [2024-07-12 10:37:54.138697] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:24:00.439 [2024-07-12 10:37:54.138847] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:00.439 pt2 00:24:00.439 10:37:54 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:24:00.439 10:37:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:00.439 10:37:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:00.439 10:37:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:00.439 10:37:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:00.439 10:37:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:00.439 10:37:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:00.439 10:37:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:00.439 10:37:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:00.439 10:37:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:00.439 10:37:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:00.439 10:37:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:00.439 10:37:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:00.439 "name": "raid_bdev1", 00:24:00.439 "uuid": "2c475ac7-7c75-4cbf-b09f-db7d4e97d5fc", 00:24:00.439 "strip_size_kb": 64, 00:24:00.439 "state": "configuring", 00:24:00.439 "raid_level": "raid5f", 00:24:00.439 "superblock": true, 00:24:00.439 "num_base_bdevs": 3, 00:24:00.439 "num_base_bdevs_discovered": 1, 00:24:00.439 "num_base_bdevs_operational": 2, 00:24:00.439 "base_bdevs_list": [ 00:24:00.439 { 00:24:00.439 "name": null, 00:24:00.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:00.439 "is_configured": false, 00:24:00.439 "data_offset": 2048, 00:24:00.439 "data_size": 63488 00:24:00.439 }, 00:24:00.439 { 00:24:00.439 "name": "pt2", 00:24:00.439 "uuid": "2383167a-1a7b-5ea3-ab33-64ebf1ee667c", 00:24:00.439 "is_configured": true, 00:24:00.439 "data_offset": 2048, 00:24:00.439 "data_size": 63488 00:24:00.439 }, 00:24:00.439 { 00:24:00.439 "name": null, 00:24:00.439 "uuid": "50089420-6aaf-5147-921c-d26538634f62", 00:24:00.439 "is_configured": false, 00:24:00.439 "data_offset": 2048, 00:24:00.439 "data_size": 63488 00:24:00.439 } 00:24:00.439 ] 00:24:00.439 }' 00:24:00.439 10:37:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:00.439 10:37:54 -- common/autotest_common.sh@10 -- # set +x 00:24:01.371 10:37:55 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:24:01.371 10:37:55 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:24:01.371 10:37:55 -- bdev/bdev_raid.sh@462 -- # i=2 00:24:01.371 10:37:55 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:24:01.371 [2024-07-12 10:37:55.271841] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:24:01.371 [2024-07-12 10:37:55.273475] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:01.371 [2024-07-12 10:37:55.273808] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:24:01.371 [2024-07-12 10:37:55.274071] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:01.371 [2024-07-12 10:37:55.275296] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:01.371 [2024-07-12 10:37:55.275633] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:24:01.371 [2024-07-12 10:37:55.276127] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:24:01.371 [2024-07-12 10:37:55.276394] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:24:01.371 [2024-07-12 10:37:55.276971] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ae80 00:24:01.371 pt3 00:24:01.371 [2024-07-12 10:37:55.277758] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:24:01.372 [2024-07-12 10:37:55.278161] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:24:01.630 [2024-07-12 10:37:55.290673] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ae80 00:24:01.630 [2024-07-12 10:37:55.290973] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ae80 00:24:01.630 [2024-07-12 10:37:55.291728] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:01.630 10:37:55 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:24:01.630 10:37:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:01.630 10:37:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:01.630 10:37:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:01.630 10:37:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:01.630 10:37:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:01.630 10:37:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:01.630 10:37:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:01.630 10:37:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:01.630 10:37:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:01.630 10:37:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:01.630 10:37:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:01.630 10:37:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:01.630 "name": "raid_bdev1", 00:24:01.630 "uuid": "2c475ac7-7c75-4cbf-b09f-db7d4e97d5fc", 00:24:01.630 "strip_size_kb": 64, 00:24:01.630 "state": "online", 00:24:01.630 "raid_level": "raid5f", 00:24:01.630 "superblock": true, 00:24:01.630 "num_base_bdevs": 3, 00:24:01.630 "num_base_bdevs_discovered": 2, 00:24:01.630 "num_base_bdevs_operational": 2, 00:24:01.630 "base_bdevs_list": [ 00:24:01.630 { 00:24:01.630 "name": null, 00:24:01.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:01.630 "is_configured": false, 00:24:01.630 "data_offset": 2048, 00:24:01.630 "data_size": 63488 00:24:01.630 }, 00:24:01.630 { 00:24:01.630 "name": "pt2", 00:24:01.630 "uuid": "2383167a-1a7b-5ea3-ab33-64ebf1ee667c", 
00:24:01.630 "is_configured": true, 00:24:01.630 "data_offset": 2048, 00:24:01.630 "data_size": 63488 00:24:01.630 }, 00:24:01.630 { 00:24:01.630 "name": "pt3", 00:24:01.630 "uuid": "50089420-6aaf-5147-921c-d26538634f62", 00:24:01.630 "is_configured": true, 00:24:01.630 "data_offset": 2048, 00:24:01.630 "data_size": 63488 00:24:01.630 } 00:24:01.630 ] 00:24:01.630 }' 00:24:01.630 10:37:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:01.630 10:37:55 -- common/autotest_common.sh@10 -- # set +x 00:24:02.564 10:37:56 -- bdev/bdev_raid.sh@468 -- # '[' 3 -gt 2 ']' 00:24:02.564 10:37:56 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:02.564 [2024-07-12 10:37:56.355566] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:02.564 [2024-07-12 10:37:56.355723] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:02.564 [2024-07-12 10:37:56.355882] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:02.564 [2024-07-12 10:37:56.355970] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:02.564 [2024-07-12 10:37:56.356151] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ae80 name raid_bdev1, state offline 00:24:02.564 10:37:56 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:02.564 10:37:56 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:24:02.823 10:37:56 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:24:02.823 10:37:56 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:24:02.823 10:37:56 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:03.082 [2024-07-12 10:37:56.791633] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:03.082 [2024-07-12 10:37:56.791803] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:03.082 [2024-07-12 10:37:56.791871] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:24:03.082 [2024-07-12 10:37:56.791982] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:03.082 [2024-07-12 10:37:56.794099] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:03.082 [2024-07-12 10:37:56.794254] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:03.082 [2024-07-12 10:37:56.794452] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:24:03.082 [2024-07-12 10:37:56.794605] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:03.082 pt1 00:24:03.082 10:37:56 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:24:03.082 10:37:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:03.082 10:37:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:03.082 10:37:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:03.082 10:37:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:03.082 10:37:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:03.082 10:37:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:03.082 10:37:56 -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:03.082 10:37:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:03.082 10:37:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:03.082 10:37:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:03.082 10:37:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:03.340 10:37:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:03.340 "name": "raid_bdev1", 00:24:03.340 "uuid": "2c475ac7-7c75-4cbf-b09f-db7d4e97d5fc", 00:24:03.340 "strip_size_kb": 64, 00:24:03.340 "state": "configuring", 00:24:03.340 "raid_level": "raid5f", 00:24:03.340 "superblock": true, 00:24:03.340 "num_base_bdevs": 3, 00:24:03.340 "num_base_bdevs_discovered": 1, 00:24:03.340 "num_base_bdevs_operational": 3, 00:24:03.340 "base_bdevs_list": [ 00:24:03.340 { 00:24:03.340 "name": "pt1", 00:24:03.340 "uuid": "2b31b101-22f0-5f70-bcf0-4c8462094068", 00:24:03.340 "is_configured": true, 00:24:03.340 "data_offset": 2048, 00:24:03.340 "data_size": 63488 00:24:03.340 }, 00:24:03.340 { 00:24:03.340 "name": null, 00:24:03.340 "uuid": "2383167a-1a7b-5ea3-ab33-64ebf1ee667c", 00:24:03.340 "is_configured": false, 00:24:03.340 "data_offset": 2048, 00:24:03.340 "data_size": 63488 00:24:03.340 }, 00:24:03.340 { 00:24:03.340 "name": null, 00:24:03.340 "uuid": "50089420-6aaf-5147-921c-d26538634f62", 00:24:03.340 "is_configured": false, 00:24:03.340 "data_offset": 2048, 00:24:03.340 "data_size": 63488 00:24:03.340 } 00:24:03.340 ] 00:24:03.340 }' 00:24:03.340 10:37:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:03.340 10:37:57 -- common/autotest_common.sh@10 -- # set +x 00:24:03.906 10:37:57 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:24:03.906 10:37:57 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:24:03.906 10:37:57 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:24:04.165 10:37:57 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:24:04.165 10:37:57 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:24:04.165 10:37:57 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:24:04.424 10:37:58 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:24:04.424 10:37:58 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:24:04.424 10:37:58 -- bdev/bdev_raid.sh@489 -- # i=2 00:24:04.424 10:37:58 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:24:04.424 [2024-07-12 10:37:58.275979] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:24:04.424 [2024-07-12 10:37:58.276186] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:04.424 [2024-07-12 10:37:58.276255] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:24:04.424 [2024-07-12 10:37:58.276383] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:04.424 [2024-07-12 10:37:58.276905] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:04.424 [2024-07-12 10:37:58.277058] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:24:04.424 [2024-07-12 10:37:58.277268] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on 
bdev pt3 00:24:04.424 [2024-07-12 10:37:58.277409] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt3 (4) greater than existing raid bdev raid_bdev1 (2) 00:24:04.424 [2024-07-12 10:37:58.277499] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:04.424 [2024-07-12 10:37:58.277547] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ba80 name raid_bdev1, state configuring 00:24:04.424 [2024-07-12 10:37:58.277823] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:24:04.424 pt3 00:24:04.424 10:37:58 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:24:04.424 10:37:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:04.424 10:37:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:04.424 10:37:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:04.424 10:37:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:04.424 10:37:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:04.424 10:37:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:04.424 10:37:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:04.424 10:37:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:04.424 10:37:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:04.424 10:37:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:04.424 10:37:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:04.683 10:37:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:04.683 "name": "raid_bdev1", 00:24:04.683 "uuid": "2c475ac7-7c75-4cbf-b09f-db7d4e97d5fc", 00:24:04.683 "strip_size_kb": 64, 00:24:04.683 "state": "configuring", 00:24:04.683 "raid_level": "raid5f", 00:24:04.683 "superblock": true, 00:24:04.683 "num_base_bdevs": 3, 00:24:04.683 "num_base_bdevs_discovered": 1, 00:24:04.683 "num_base_bdevs_operational": 2, 00:24:04.683 "base_bdevs_list": [ 00:24:04.683 { 00:24:04.683 "name": null, 00:24:04.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:04.683 "is_configured": false, 00:24:04.683 "data_offset": 2048, 00:24:04.683 "data_size": 63488 00:24:04.683 }, 00:24:04.683 { 00:24:04.683 "name": null, 00:24:04.683 "uuid": "2383167a-1a7b-5ea3-ab33-64ebf1ee667c", 00:24:04.683 "is_configured": false, 00:24:04.683 "data_offset": 2048, 00:24:04.683 "data_size": 63488 00:24:04.683 }, 00:24:04.683 { 00:24:04.683 "name": "pt3", 00:24:04.683 "uuid": "50089420-6aaf-5147-921c-d26538634f62", 00:24:04.683 "is_configured": true, 00:24:04.683 "data_offset": 2048, 00:24:04.683 "data_size": 63488 00:24:04.683 } 00:24:04.683 ] 00:24:04.683 }' 00:24:04.683 10:37:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:04.683 10:37:58 -- common/autotest_common.sh@10 -- # set +x 00:24:05.618 10:37:59 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:24:05.618 10:37:59 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:24:05.618 10:37:59 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:05.618 [2024-07-12 10:37:59.336152] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:05.618 [2024-07-12 10:37:59.336338] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:05.618 [2024-07-12 
10:37:59.336401] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:24:05.618 [2024-07-12 10:37:59.336514] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:05.618 [2024-07-12 10:37:59.337010] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:05.618 [2024-07-12 10:37:59.337162] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:05.618 [2024-07-12 10:37:59.337490] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:24:05.618 [2024-07-12 10:37:59.337667] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:05.618 [2024-07-12 10:37:59.337851] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c080 00:24:05.618 [2024-07-12 10:37:59.337980] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:24:05.618 [2024-07-12 10:37:59.338146] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:24:05.618 [2024-07-12 10:37:59.342629] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c080 00:24:05.618 [2024-07-12 10:37:59.342750] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c080 00:24:05.618 [2024-07-12 10:37:59.343046] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:05.618 pt2 00:24:05.618 10:37:59 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:24:05.618 10:37:59 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:24:05.618 10:37:59 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:24:05.618 10:37:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:05.618 10:37:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:05.618 10:37:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:05.618 10:37:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:05.618 10:37:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:05.618 10:37:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:05.618 10:37:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:05.618 10:37:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:05.618 10:37:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:05.618 10:37:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:05.618 10:37:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:05.876 10:37:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:05.876 "name": "raid_bdev1", 00:24:05.876 "uuid": "2c475ac7-7c75-4cbf-b09f-db7d4e97d5fc", 00:24:05.876 "strip_size_kb": 64, 00:24:05.876 "state": "online", 00:24:05.876 "raid_level": "raid5f", 00:24:05.876 "superblock": true, 00:24:05.876 "num_base_bdevs": 3, 00:24:05.876 "num_base_bdevs_discovered": 2, 00:24:05.876 "num_base_bdevs_operational": 2, 00:24:05.876 "base_bdevs_list": [ 00:24:05.876 { 00:24:05.876 "name": null, 00:24:05.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:05.876 "is_configured": false, 00:24:05.876 "data_offset": 2048, 00:24:05.876 "data_size": 63488 00:24:05.876 }, 00:24:05.876 { 00:24:05.876 "name": "pt2", 00:24:05.876 "uuid": "2383167a-1a7b-5ea3-ab33-64ebf1ee667c", 00:24:05.876 "is_configured": true, 00:24:05.876 "data_offset": 2048, 
00:24:05.876 "data_size": 63488 00:24:05.876 }, 00:24:05.876 { 00:24:05.876 "name": "pt3", 00:24:05.876 "uuid": "50089420-6aaf-5147-921c-d26538634f62", 00:24:05.876 "is_configured": true, 00:24:05.876 "data_offset": 2048, 00:24:05.876 "data_size": 63488 00:24:05.876 } 00:24:05.876 ] 00:24:05.876 }' 00:24:05.876 10:37:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:05.876 10:37:59 -- common/autotest_common.sh@10 -- # set +x 00:24:06.442 10:38:00 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:06.443 10:38:00 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:24:06.700 [2024-07-12 10:38:00.548188] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:06.700 10:38:00 -- bdev/bdev_raid.sh@506 -- # '[' 2c475ac7-7c75-4cbf-b09f-db7d4e97d5fc '!=' 2c475ac7-7c75-4cbf-b09f-db7d4e97d5fc ']' 00:24:06.700 10:38:00 -- bdev/bdev_raid.sh@511 -- # killprocess 131415 00:24:06.700 10:38:00 -- common/autotest_common.sh@926 -- # '[' -z 131415 ']' 00:24:06.700 10:38:00 -- common/autotest_common.sh@930 -- # kill -0 131415 00:24:06.700 10:38:00 -- common/autotest_common.sh@931 -- # uname 00:24:06.700 10:38:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:06.700 10:38:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 131415 00:24:06.700 killing process with pid 131415 00:24:06.700 10:38:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:06.700 10:38:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:06.700 10:38:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 131415' 00:24:06.700 10:38:00 -- common/autotest_common.sh@945 -- # kill 131415 00:24:06.700 10:38:00 -- common/autotest_common.sh@950 -- # wait 131415 00:24:06.700 [2024-07-12 10:38:00.582833] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:06.700 [2024-07-12 10:38:00.582896] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:06.700 [2024-07-12 10:38:00.582949] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:06.700 [2024-07-12 10:38:00.582958] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c080 name raid_bdev1, state offline 00:24:06.958 [2024-07-12 10:38:00.772383] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:07.892 ************************************ 00:24:07.892 END TEST raid5f_superblock_test 00:24:07.892 ************************************ 00:24:07.892 10:38:01 -- bdev/bdev_raid.sh@513 -- # return 0 00:24:07.892 00:24:07.892 real 0m18.774s 00:24:07.892 user 0m34.750s 00:24:07.892 sys 0m2.136s 00:24:07.892 10:38:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:07.892 10:38:01 -- common/autotest_common.sh@10 -- # set +x 00:24:07.892 10:38:01 -- bdev/bdev_raid.sh@747 -- # '[' true = true ']' 00:24:07.892 10:38:01 -- bdev/bdev_raid.sh@748 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false 00:24:07.892 10:38:01 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:24:07.892 10:38:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:07.892 10:38:01 -- common/autotest_common.sh@10 -- # set +x 00:24:07.892 ************************************ 00:24:07.892 START TEST raid5f_rebuild_test 00:24:07.892 ************************************ 00:24:07.892 10:38:01 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid5f 3 
false false 00:24:07.892 10:38:01 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:24:07.892 10:38:01 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=3 00:24:07.892 10:38:01 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:24:07.892 10:38:01 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:24:07.892 10:38:01 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:24:07.892 10:38:01 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:24:07.892 10:38:01 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:07.892 10:38:01 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:24:07.892 10:38:01 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:07.892 10:38:01 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:07.892 10:38:01 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:24:07.892 10:38:01 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:07.892 10:38:01 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:07.892 10:38:01 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:24:07.892 10:38:01 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:07.892 10:38:01 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:07.892 10:38:01 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:24:07.892 10:38:01 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:24:07.892 10:38:01 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:24:07.892 10:38:01 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:24:07.892 10:38:01 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:24:07.892 10:38:01 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:24:07.892 10:38:01 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:24:07.892 10:38:01 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:24:07.892 10:38:01 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:24:07.892 10:38:01 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:24:07.892 10:38:01 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:24:07.892 10:38:01 -- bdev/bdev_raid.sh@544 -- # raid_pid=132050 00:24:07.892 10:38:01 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:24:07.892 10:38:01 -- bdev/bdev_raid.sh@545 -- # waitforlisten 132050 /var/tmp/spdk-raid.sock 00:24:07.892 10:38:01 -- common/autotest_common.sh@819 -- # '[' -z 132050 ']' 00:24:07.892 10:38:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:07.892 10:38:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:07.892 10:38:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:07.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:07.892 10:38:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:07.892 10:38:01 -- common/autotest_common.sh@10 -- # set +x 00:24:08.151 [2024-07-12 10:38:01.811065] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:24:08.151 [2024-07-12 10:38:01.811419] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132050 ] 00:24:08.151 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:24:08.151 Zero copy mechanism will not be used. 00:24:08.151 [2024-07-12 10:38:01.966516] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:08.409 [2024-07-12 10:38:02.184122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:08.667 [2024-07-12 10:38:02.372478] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:08.925 10:38:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:08.925 10:38:02 -- common/autotest_common.sh@852 -- # return 0 00:24:08.925 10:38:02 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:08.925 10:38:02 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:24:08.925 10:38:02 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:09.184 BaseBdev1 00:24:09.184 10:38:02 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:09.184 10:38:02 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:24:09.184 10:38:02 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:09.443 BaseBdev2 00:24:09.443 10:38:03 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:09.443 10:38:03 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:24:09.443 10:38:03 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:24:09.701 BaseBdev3 00:24:09.701 10:38:03 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:24:09.959 spare_malloc 00:24:09.959 10:38:03 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:24:10.216 spare_delay 00:24:10.216 10:38:03 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:10.475 [2024-07-12 10:38:04.147677] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:10.475 [2024-07-12 10:38:04.148301] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:10.475 [2024-07-12 10:38:04.148550] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:24:10.475 [2024-07-12 10:38:04.148791] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:10.475 [2024-07-12 10:38:04.151312] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:10.475 [2024-07-12 10:38:04.151577] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:10.475 spare 00:24:10.475 10:38:04 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:24:10.475 [2024-07-12 10:38:04.380121] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:10.475 [2024-07-12 10:38:04.382181] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:10.475 [2024-07-12 10:38:04.382346] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:10.475 [2024-07-12 10:38:04.382464] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io 
device register 0x616000008a80 00:24:10.475 [2024-07-12 10:38:04.382595] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:24:10.475 [2024-07-12 10:38:04.382832] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:24:10.475 [2024-07-12 10:38:04.387735] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:24:10.475 [2024-07-12 10:38:04.387882] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:24:10.475 [2024-07-12 10:38:04.388150] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:10.733 10:38:04 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:10.733 10:38:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:10.733 10:38:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:10.733 10:38:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:10.733 10:38:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:10.733 10:38:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:10.733 10:38:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:10.733 10:38:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:10.733 10:38:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:10.733 10:38:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:10.733 10:38:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:10.733 10:38:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:10.733 10:38:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:10.733 "name": "raid_bdev1", 00:24:10.733 "uuid": "95244ea5-1e71-4ee6-9577-16015b200400", 00:24:10.733 "strip_size_kb": 64, 00:24:10.733 "state": "online", 00:24:10.733 "raid_level": "raid5f", 00:24:10.733 "superblock": false, 00:24:10.733 "num_base_bdevs": 3, 00:24:10.734 "num_base_bdevs_discovered": 3, 00:24:10.734 "num_base_bdevs_operational": 3, 00:24:10.734 "base_bdevs_list": [ 00:24:10.734 { 00:24:10.734 "name": "BaseBdev1", 00:24:10.734 "uuid": "31d4538d-42c6-4d2d-9090-361db33e707a", 00:24:10.734 "is_configured": true, 00:24:10.734 "data_offset": 0, 00:24:10.734 "data_size": 65536 00:24:10.734 }, 00:24:10.734 { 00:24:10.734 "name": "BaseBdev2", 00:24:10.734 "uuid": "e133b9fb-36bf-4cf0-b92f-5bfce8cda107", 00:24:10.734 "is_configured": true, 00:24:10.734 "data_offset": 0, 00:24:10.734 "data_size": 65536 00:24:10.734 }, 00:24:10.734 { 00:24:10.734 "name": "BaseBdev3", 00:24:10.734 "uuid": "6556b6a1-4722-4b27-ab21-5dda69ccb131", 00:24:10.734 "is_configured": true, 00:24:10.734 "data_offset": 0, 00:24:10.734 "data_size": 65536 00:24:10.734 } 00:24:10.734 ] 00:24:10.734 }' 00:24:10.734 10:38:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:10.734 10:38:04 -- common/autotest_common.sh@10 -- # set +x 00:24:11.746 10:38:05 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:11.746 10:38:05 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:24:11.746 [2024-07-12 10:38:05.494035] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:11.746 10:38:05 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=131072 00:24:11.746 10:38:05 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:11.746 10:38:05 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:12.008 10:38:05 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:24:12.008 10:38:05 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:24:12.008 10:38:05 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:24:12.008 10:38:05 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:24:12.008 10:38:05 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:12.008 10:38:05 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:24:12.008 10:38:05 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:12.008 10:38:05 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:24:12.008 10:38:05 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:12.008 10:38:05 -- bdev/nbd_common.sh@12 -- # local i 00:24:12.008 10:38:05 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:12.008 10:38:05 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:12.008 10:38:05 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:24:12.266 [2024-07-12 10:38:05.962044] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:24:12.266 /dev/nbd0 00:24:12.266 10:38:05 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:12.266 10:38:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:12.266 10:38:05 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:24:12.266 10:38:05 -- common/autotest_common.sh@857 -- # local i 00:24:12.266 10:38:05 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:24:12.267 10:38:05 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:24:12.267 10:38:05 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:24:12.267 10:38:05 -- common/autotest_common.sh@861 -- # break 00:24:12.267 10:38:05 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:24:12.267 10:38:05 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:24:12.267 10:38:05 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:12.267 1+0 records in 00:24:12.267 1+0 records out 00:24:12.267 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000515256 s, 7.9 MB/s 00:24:12.267 10:38:06 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:12.267 10:38:06 -- common/autotest_common.sh@874 -- # size=4096 00:24:12.267 10:38:06 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:12.267 10:38:06 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:24:12.267 10:38:06 -- common/autotest_common.sh@877 -- # return 0 00:24:12.267 10:38:06 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:12.267 10:38:06 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:12.267 10:38:06 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:24:12.267 10:38:06 -- bdev/bdev_raid.sh@581 -- # write_unit_size=256 00:24:12.267 10:38:06 -- bdev/bdev_raid.sh@582 -- # echo 128 00:24:12.267 10:38:06 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:24:12.526 512+0 records in 00:24:12.526 512+0 records out 00:24:12.526 67108864 bytes (67 MB, 64 MiB) copied, 0.374919 s, 179 MB/s 00:24:12.526 10:38:06 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:24:12.526 10:38:06 -- bdev/nbd_common.sh@49 -- # local 
rpc_server=/var/tmp/spdk-raid.sock 00:24:12.526 10:38:06 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:24:12.526 10:38:06 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:12.526 10:38:06 -- bdev/nbd_common.sh@51 -- # local i 00:24:12.526 10:38:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:12.526 10:38:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:12.786 10:38:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:12.786 10:38:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:12.786 10:38:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:12.786 10:38:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:12.786 10:38:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:12.786 10:38:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:12.786 10:38:06 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:24:12.786 [2024-07-12 10:38:06.589057] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:12.786 10:38:06 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:24:12.786 10:38:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:12.786 10:38:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:12.786 10:38:06 -- bdev/nbd_common.sh@41 -- # break 00:24:12.786 10:38:06 -- bdev/nbd_common.sh@45 -- # return 0 00:24:12.786 10:38:06 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:24:13.045 [2024-07-12 10:38:06.926760] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:13.045 10:38:06 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:24:13.045 10:38:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:13.045 10:38:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:13.045 10:38:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:13.045 10:38:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:13.045 10:38:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:13.045 10:38:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:13.045 10:38:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:13.045 10:38:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:13.045 10:38:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:13.045 10:38:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:13.045 10:38:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:13.304 10:38:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:13.304 "name": "raid_bdev1", 00:24:13.304 "uuid": "95244ea5-1e71-4ee6-9577-16015b200400", 00:24:13.304 "strip_size_kb": 64, 00:24:13.304 "state": "online", 00:24:13.304 "raid_level": "raid5f", 00:24:13.304 "superblock": false, 00:24:13.304 "num_base_bdevs": 3, 00:24:13.304 "num_base_bdevs_discovered": 2, 00:24:13.304 "num_base_bdevs_operational": 2, 00:24:13.304 "base_bdevs_list": [ 00:24:13.304 { 00:24:13.304 "name": null, 00:24:13.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:13.304 "is_configured": false, 00:24:13.304 "data_offset": 0, 00:24:13.304 "data_size": 65536 00:24:13.304 }, 00:24:13.304 { 00:24:13.304 "name": "BaseBdev2", 00:24:13.304 "uuid": "e133b9fb-36bf-4cf0-b92f-5bfce8cda107", 00:24:13.304 "is_configured": true, 00:24:13.304 "data_offset": 0, 
00:24:13.304 "data_size": 65536 00:24:13.304 }, 00:24:13.304 { 00:24:13.304 "name": "BaseBdev3", 00:24:13.304 "uuid": "6556b6a1-4722-4b27-ab21-5dda69ccb131", 00:24:13.304 "is_configured": true, 00:24:13.304 "data_offset": 0, 00:24:13.304 "data_size": 65536 00:24:13.304 } 00:24:13.304 ] 00:24:13.304 }' 00:24:13.304 10:38:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:13.304 10:38:07 -- common/autotest_common.sh@10 -- # set +x 00:24:14.241 10:38:07 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:14.241 [2024-07-12 10:38:07.958944] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:24:14.241 [2024-07-12 10:38:07.959100] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:14.241 [2024-07-12 10:38:07.970384] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002cfb0 00:24:14.241 [2024-07-12 10:38:07.976235] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:14.241 10:38:07 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:24:15.177 10:38:08 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:15.177 10:38:08 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:15.177 10:38:08 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:15.177 10:38:08 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:15.177 10:38:08 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:15.177 10:38:08 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:15.177 10:38:08 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:15.435 10:38:09 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:15.435 "name": "raid_bdev1", 00:24:15.435 "uuid": "95244ea5-1e71-4ee6-9577-16015b200400", 00:24:15.436 "strip_size_kb": 64, 00:24:15.436 "state": "online", 00:24:15.436 "raid_level": "raid5f", 00:24:15.436 "superblock": false, 00:24:15.436 "num_base_bdevs": 3, 00:24:15.436 "num_base_bdevs_discovered": 3, 00:24:15.436 "num_base_bdevs_operational": 3, 00:24:15.436 "process": { 00:24:15.436 "type": "rebuild", 00:24:15.436 "target": "spare", 00:24:15.436 "progress": { 00:24:15.436 "blocks": 22528, 00:24:15.436 "percent": 17 00:24:15.436 } 00:24:15.436 }, 00:24:15.436 "base_bdevs_list": [ 00:24:15.436 { 00:24:15.436 "name": "spare", 00:24:15.436 "uuid": "85240dd6-b27f-556e-b86f-731b6ff348a2", 00:24:15.436 "is_configured": true, 00:24:15.436 "data_offset": 0, 00:24:15.436 "data_size": 65536 00:24:15.436 }, 00:24:15.436 { 00:24:15.436 "name": "BaseBdev2", 00:24:15.436 "uuid": "e133b9fb-36bf-4cf0-b92f-5bfce8cda107", 00:24:15.436 "is_configured": true, 00:24:15.436 "data_offset": 0, 00:24:15.436 "data_size": 65536 00:24:15.436 }, 00:24:15.436 { 00:24:15.436 "name": "BaseBdev3", 00:24:15.436 "uuid": "6556b6a1-4722-4b27-ab21-5dda69ccb131", 00:24:15.436 "is_configured": true, 00:24:15.436 "data_offset": 0, 00:24:15.436 "data_size": 65536 00:24:15.436 } 00:24:15.436 ] 00:24:15.436 }' 00:24:15.436 10:38:09 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:15.436 10:38:09 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:15.436 10:38:09 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:15.436 10:38:09 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:15.436 10:38:09 -- 
bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:24:15.693 [2024-07-12 10:38:09.505652] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:15.693 [2024-07-12 10:38:09.589469] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:15.694 [2024-07-12 10:38:09.589665] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:15.952 10:38:09 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:24:15.952 10:38:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:15.952 10:38:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:15.952 10:38:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:15.952 10:38:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:15.952 10:38:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:15.952 10:38:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:15.952 10:38:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:15.952 10:38:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:15.952 10:38:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:15.952 10:38:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:15.952 10:38:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:15.952 10:38:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:15.952 "name": "raid_bdev1", 00:24:15.952 "uuid": "95244ea5-1e71-4ee6-9577-16015b200400", 00:24:15.952 "strip_size_kb": 64, 00:24:15.952 "state": "online", 00:24:15.952 "raid_level": "raid5f", 00:24:15.952 "superblock": false, 00:24:15.952 "num_base_bdevs": 3, 00:24:15.952 "num_base_bdevs_discovered": 2, 00:24:15.952 "num_base_bdevs_operational": 2, 00:24:15.952 "base_bdevs_list": [ 00:24:15.952 { 00:24:15.952 "name": null, 00:24:15.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:15.952 "is_configured": false, 00:24:15.952 "data_offset": 0, 00:24:15.952 "data_size": 65536 00:24:15.952 }, 00:24:15.952 { 00:24:15.952 "name": "BaseBdev2", 00:24:15.952 "uuid": "e133b9fb-36bf-4cf0-b92f-5bfce8cda107", 00:24:15.952 "is_configured": true, 00:24:15.952 "data_offset": 0, 00:24:15.952 "data_size": 65536 00:24:15.952 }, 00:24:15.952 { 00:24:15.952 "name": "BaseBdev3", 00:24:15.952 "uuid": "6556b6a1-4722-4b27-ab21-5dda69ccb131", 00:24:15.952 "is_configured": true, 00:24:15.952 "data_offset": 0, 00:24:15.952 "data_size": 65536 00:24:15.952 } 00:24:15.952 ] 00:24:15.952 }' 00:24:15.952 10:38:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:15.952 10:38:09 -- common/autotest_common.sh@10 -- # set +x 00:24:16.888 10:38:10 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:16.888 10:38:10 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:16.888 10:38:10 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:16.888 10:38:10 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:16.888 10:38:10 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:16.888 10:38:10 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:16.888 10:38:10 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:16.888 10:38:10 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 
00:24:16.888 "name": "raid_bdev1", 00:24:16.888 "uuid": "95244ea5-1e71-4ee6-9577-16015b200400", 00:24:16.888 "strip_size_kb": 64, 00:24:16.888 "state": "online", 00:24:16.888 "raid_level": "raid5f", 00:24:16.888 "superblock": false, 00:24:16.888 "num_base_bdevs": 3, 00:24:16.888 "num_base_bdevs_discovered": 2, 00:24:16.888 "num_base_bdevs_operational": 2, 00:24:16.888 "base_bdevs_list": [ 00:24:16.888 { 00:24:16.888 "name": null, 00:24:16.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:16.888 "is_configured": false, 00:24:16.888 "data_offset": 0, 00:24:16.888 "data_size": 65536 00:24:16.888 }, 00:24:16.888 { 00:24:16.888 "name": "BaseBdev2", 00:24:16.888 "uuid": "e133b9fb-36bf-4cf0-b92f-5bfce8cda107", 00:24:16.888 "is_configured": true, 00:24:16.888 "data_offset": 0, 00:24:16.888 "data_size": 65536 00:24:16.888 }, 00:24:16.888 { 00:24:16.888 "name": "BaseBdev3", 00:24:16.888 "uuid": "6556b6a1-4722-4b27-ab21-5dda69ccb131", 00:24:16.888 "is_configured": true, 00:24:16.888 "data_offset": 0, 00:24:16.888 "data_size": 65536 00:24:16.888 } 00:24:16.888 ] 00:24:16.888 }' 00:24:16.888 10:38:10 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:17.146 10:38:10 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:17.146 10:38:10 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:17.146 10:38:10 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:17.146 10:38:10 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:17.403 [2024-07-12 10:38:11.103500] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:24:17.403 [2024-07-12 10:38:11.103653] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:17.403 [2024-07-12 10:38:11.113182] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002d150 00:24:17.403 [2024-07-12 10:38:11.118843] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:17.403 10:38:11 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:24:18.336 10:38:12 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:18.336 10:38:12 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:18.336 10:38:12 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:18.336 10:38:12 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:18.336 10:38:12 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:18.336 10:38:12 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:18.336 10:38:12 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:18.594 10:38:12 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:18.594 "name": "raid_bdev1", 00:24:18.594 "uuid": "95244ea5-1e71-4ee6-9577-16015b200400", 00:24:18.594 "strip_size_kb": 64, 00:24:18.594 "state": "online", 00:24:18.594 "raid_level": "raid5f", 00:24:18.594 "superblock": false, 00:24:18.594 "num_base_bdevs": 3, 00:24:18.594 "num_base_bdevs_discovered": 3, 00:24:18.594 "num_base_bdevs_operational": 3, 00:24:18.594 "process": { 00:24:18.594 "type": "rebuild", 00:24:18.594 "target": "spare", 00:24:18.594 "progress": { 00:24:18.594 "blocks": 24576, 00:24:18.594 "percent": 18 00:24:18.594 } 00:24:18.594 }, 00:24:18.594 "base_bdevs_list": [ 00:24:18.594 { 00:24:18.594 "name": "spare", 00:24:18.594 "uuid": 
"85240dd6-b27f-556e-b86f-731b6ff348a2", 00:24:18.594 "is_configured": true, 00:24:18.594 "data_offset": 0, 00:24:18.594 "data_size": 65536 00:24:18.594 }, 00:24:18.594 { 00:24:18.594 "name": "BaseBdev2", 00:24:18.594 "uuid": "e133b9fb-36bf-4cf0-b92f-5bfce8cda107", 00:24:18.594 "is_configured": true, 00:24:18.594 "data_offset": 0, 00:24:18.594 "data_size": 65536 00:24:18.594 }, 00:24:18.594 { 00:24:18.594 "name": "BaseBdev3", 00:24:18.594 "uuid": "6556b6a1-4722-4b27-ab21-5dda69ccb131", 00:24:18.594 "is_configured": true, 00:24:18.594 "data_offset": 0, 00:24:18.594 "data_size": 65536 00:24:18.594 } 00:24:18.594 ] 00:24:18.594 }' 00:24:18.594 10:38:12 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:18.594 10:38:12 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:18.594 10:38:12 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:18.594 10:38:12 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:18.594 10:38:12 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:24:18.594 10:38:12 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=3 00:24:18.594 10:38:12 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:24:18.594 10:38:12 -- bdev/bdev_raid.sh@657 -- # local timeout=605 00:24:18.594 10:38:12 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:18.594 10:38:12 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:18.594 10:38:12 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:18.594 10:38:12 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:18.594 10:38:12 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:18.594 10:38:12 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:18.594 10:38:12 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:18.594 10:38:12 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:18.853 10:38:12 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:18.853 "name": "raid_bdev1", 00:24:18.853 "uuid": "95244ea5-1e71-4ee6-9577-16015b200400", 00:24:18.853 "strip_size_kb": 64, 00:24:18.853 "state": "online", 00:24:18.853 "raid_level": "raid5f", 00:24:18.853 "superblock": false, 00:24:18.853 "num_base_bdevs": 3, 00:24:18.853 "num_base_bdevs_discovered": 3, 00:24:18.853 "num_base_bdevs_operational": 3, 00:24:18.854 "process": { 00:24:18.854 "type": "rebuild", 00:24:18.854 "target": "spare", 00:24:18.854 "progress": { 00:24:18.854 "blocks": 30720, 00:24:18.854 "percent": 23 00:24:18.854 } 00:24:18.854 }, 00:24:18.854 "base_bdevs_list": [ 00:24:18.854 { 00:24:18.854 "name": "spare", 00:24:18.854 "uuid": "85240dd6-b27f-556e-b86f-731b6ff348a2", 00:24:18.854 "is_configured": true, 00:24:18.854 "data_offset": 0, 00:24:18.854 "data_size": 65536 00:24:18.854 }, 00:24:18.854 { 00:24:18.854 "name": "BaseBdev2", 00:24:18.854 "uuid": "e133b9fb-36bf-4cf0-b92f-5bfce8cda107", 00:24:18.854 "is_configured": true, 00:24:18.854 "data_offset": 0, 00:24:18.854 "data_size": 65536 00:24:18.854 }, 00:24:18.854 { 00:24:18.854 "name": "BaseBdev3", 00:24:18.854 "uuid": "6556b6a1-4722-4b27-ab21-5dda69ccb131", 00:24:18.854 "is_configured": true, 00:24:18.854 "data_offset": 0, 00:24:18.854 "data_size": 65536 00:24:18.854 } 00:24:18.854 ] 00:24:18.854 }' 00:24:18.854 10:38:12 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:18.854 10:38:12 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:18.854 
10:38:12 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:19.112 10:38:12 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:19.112 10:38:12 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:20.048 10:38:13 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:20.048 10:38:13 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:20.048 10:38:13 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:20.048 10:38:13 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:20.048 10:38:13 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:20.048 10:38:13 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:20.048 10:38:13 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:20.048 10:38:13 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:20.048 10:38:13 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:20.048 "name": "raid_bdev1", 00:24:20.048 "uuid": "95244ea5-1e71-4ee6-9577-16015b200400", 00:24:20.048 "strip_size_kb": 64, 00:24:20.048 "state": "online", 00:24:20.048 "raid_level": "raid5f", 00:24:20.048 "superblock": false, 00:24:20.048 "num_base_bdevs": 3, 00:24:20.048 "num_base_bdevs_discovered": 3, 00:24:20.048 "num_base_bdevs_operational": 3, 00:24:20.048 "process": { 00:24:20.048 "type": "rebuild", 00:24:20.048 "target": "spare", 00:24:20.048 "progress": { 00:24:20.048 "blocks": 57344, 00:24:20.048 "percent": 43 00:24:20.048 } 00:24:20.048 }, 00:24:20.048 "base_bdevs_list": [ 00:24:20.048 { 00:24:20.048 "name": "spare", 00:24:20.048 "uuid": "85240dd6-b27f-556e-b86f-731b6ff348a2", 00:24:20.048 "is_configured": true, 00:24:20.048 "data_offset": 0, 00:24:20.048 "data_size": 65536 00:24:20.048 }, 00:24:20.048 { 00:24:20.048 "name": "BaseBdev2", 00:24:20.048 "uuid": "e133b9fb-36bf-4cf0-b92f-5bfce8cda107", 00:24:20.048 "is_configured": true, 00:24:20.048 "data_offset": 0, 00:24:20.048 "data_size": 65536 00:24:20.048 }, 00:24:20.048 { 00:24:20.048 "name": "BaseBdev3", 00:24:20.048 "uuid": "6556b6a1-4722-4b27-ab21-5dda69ccb131", 00:24:20.048 "is_configured": true, 00:24:20.048 "data_offset": 0, 00:24:20.048 "data_size": 65536 00:24:20.048 } 00:24:20.048 ] 00:24:20.048 }' 00:24:20.048 10:38:13 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:20.307 10:38:14 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:20.307 10:38:14 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:20.307 10:38:14 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:20.307 10:38:14 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:21.242 10:38:15 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:21.242 10:38:15 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:21.242 10:38:15 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:21.242 10:38:15 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:21.242 10:38:15 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:21.242 10:38:15 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:21.242 10:38:15 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:21.242 10:38:15 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:21.500 10:38:15 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:21.500 "name": "raid_bdev1", 00:24:21.500 "uuid": 
"95244ea5-1e71-4ee6-9577-16015b200400", 00:24:21.500 "strip_size_kb": 64, 00:24:21.500 "state": "online", 00:24:21.500 "raid_level": "raid5f", 00:24:21.500 "superblock": false, 00:24:21.500 "num_base_bdevs": 3, 00:24:21.500 "num_base_bdevs_discovered": 3, 00:24:21.500 "num_base_bdevs_operational": 3, 00:24:21.500 "process": { 00:24:21.500 "type": "rebuild", 00:24:21.500 "target": "spare", 00:24:21.500 "progress": { 00:24:21.500 "blocks": 83968, 00:24:21.500 "percent": 64 00:24:21.500 } 00:24:21.500 }, 00:24:21.500 "base_bdevs_list": [ 00:24:21.500 { 00:24:21.500 "name": "spare", 00:24:21.500 "uuid": "85240dd6-b27f-556e-b86f-731b6ff348a2", 00:24:21.500 "is_configured": true, 00:24:21.500 "data_offset": 0, 00:24:21.500 "data_size": 65536 00:24:21.500 }, 00:24:21.500 { 00:24:21.500 "name": "BaseBdev2", 00:24:21.500 "uuid": "e133b9fb-36bf-4cf0-b92f-5bfce8cda107", 00:24:21.500 "is_configured": true, 00:24:21.500 "data_offset": 0, 00:24:21.500 "data_size": 65536 00:24:21.500 }, 00:24:21.500 { 00:24:21.500 "name": "BaseBdev3", 00:24:21.500 "uuid": "6556b6a1-4722-4b27-ab21-5dda69ccb131", 00:24:21.500 "is_configured": true, 00:24:21.500 "data_offset": 0, 00:24:21.500 "data_size": 65536 00:24:21.500 } 00:24:21.500 ] 00:24:21.500 }' 00:24:21.500 10:38:15 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:21.500 10:38:15 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:21.500 10:38:15 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:21.758 10:38:15 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:21.758 10:38:15 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:22.693 10:38:16 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:22.693 10:38:16 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:22.693 10:38:16 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:22.693 10:38:16 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:22.693 10:38:16 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:22.693 10:38:16 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:22.693 10:38:16 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:22.693 10:38:16 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:22.952 10:38:16 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:22.952 "name": "raid_bdev1", 00:24:22.952 "uuid": "95244ea5-1e71-4ee6-9577-16015b200400", 00:24:22.952 "strip_size_kb": 64, 00:24:22.952 "state": "online", 00:24:22.952 "raid_level": "raid5f", 00:24:22.952 "superblock": false, 00:24:22.952 "num_base_bdevs": 3, 00:24:22.952 "num_base_bdevs_discovered": 3, 00:24:22.952 "num_base_bdevs_operational": 3, 00:24:22.952 "process": { 00:24:22.952 "type": "rebuild", 00:24:22.952 "target": "spare", 00:24:22.952 "progress": { 00:24:22.952 "blocks": 110592, 00:24:22.952 "percent": 84 00:24:22.952 } 00:24:22.952 }, 00:24:22.952 "base_bdevs_list": [ 00:24:22.952 { 00:24:22.952 "name": "spare", 00:24:22.952 "uuid": "85240dd6-b27f-556e-b86f-731b6ff348a2", 00:24:22.952 "is_configured": true, 00:24:22.952 "data_offset": 0, 00:24:22.952 "data_size": 65536 00:24:22.952 }, 00:24:22.952 { 00:24:22.952 "name": "BaseBdev2", 00:24:22.952 "uuid": "e133b9fb-36bf-4cf0-b92f-5bfce8cda107", 00:24:22.952 "is_configured": true, 00:24:22.952 "data_offset": 0, 00:24:22.952 "data_size": 65536 00:24:22.952 }, 00:24:22.952 { 00:24:22.952 "name": "BaseBdev3", 00:24:22.952 "uuid": 
"6556b6a1-4722-4b27-ab21-5dda69ccb131", 00:24:22.952 "is_configured": true, 00:24:22.952 "data_offset": 0, 00:24:22.952 "data_size": 65536 00:24:22.952 } 00:24:22.952 ] 00:24:22.952 }' 00:24:22.952 10:38:16 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:22.952 10:38:16 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:22.952 10:38:16 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:22.952 10:38:16 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:22.952 10:38:16 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:23.889 [2024-07-12 10:38:17.570553] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:23.889 [2024-07-12 10:38:17.570831] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:23.889 [2024-07-12 10:38:17.571021] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:23.889 10:38:17 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:23.889 10:38:17 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:23.889 10:38:17 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:23.889 10:38:17 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:23.889 10:38:17 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:23.889 10:38:17 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:23.889 10:38:17 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:23.889 10:38:17 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:24.148 10:38:18 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:24.148 "name": "raid_bdev1", 00:24:24.148 "uuid": "95244ea5-1e71-4ee6-9577-16015b200400", 00:24:24.148 "strip_size_kb": 64, 00:24:24.148 "state": "online", 00:24:24.148 "raid_level": "raid5f", 00:24:24.148 "superblock": false, 00:24:24.148 "num_base_bdevs": 3, 00:24:24.148 "num_base_bdevs_discovered": 3, 00:24:24.148 "num_base_bdevs_operational": 3, 00:24:24.148 "base_bdevs_list": [ 00:24:24.148 { 00:24:24.148 "name": "spare", 00:24:24.148 "uuid": "85240dd6-b27f-556e-b86f-731b6ff348a2", 00:24:24.148 "is_configured": true, 00:24:24.148 "data_offset": 0, 00:24:24.148 "data_size": 65536 00:24:24.148 }, 00:24:24.148 { 00:24:24.148 "name": "BaseBdev2", 00:24:24.148 "uuid": "e133b9fb-36bf-4cf0-b92f-5bfce8cda107", 00:24:24.148 "is_configured": true, 00:24:24.148 "data_offset": 0, 00:24:24.148 "data_size": 65536 00:24:24.148 }, 00:24:24.148 { 00:24:24.148 "name": "BaseBdev3", 00:24:24.148 "uuid": "6556b6a1-4722-4b27-ab21-5dda69ccb131", 00:24:24.148 "is_configured": true, 00:24:24.148 "data_offset": 0, 00:24:24.148 "data_size": 65536 00:24:24.148 } 00:24:24.148 ] 00:24:24.148 }' 00:24:24.148 10:38:18 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:24.407 10:38:18 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:24.407 10:38:18 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:24.407 10:38:18 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:24:24.407 10:38:18 -- bdev/bdev_raid.sh@660 -- # break 00:24:24.407 10:38:18 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:24.407 10:38:18 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:24.407 10:38:18 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:24.407 10:38:18 -- bdev/bdev_raid.sh@185 -- # local 
target=none 00:24:24.407 10:38:18 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:24.407 10:38:18 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:24.407 10:38:18 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:24.666 10:38:18 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:24.666 "name": "raid_bdev1", 00:24:24.666 "uuid": "95244ea5-1e71-4ee6-9577-16015b200400", 00:24:24.666 "strip_size_kb": 64, 00:24:24.666 "state": "online", 00:24:24.666 "raid_level": "raid5f", 00:24:24.666 "superblock": false, 00:24:24.666 "num_base_bdevs": 3, 00:24:24.666 "num_base_bdevs_discovered": 3, 00:24:24.666 "num_base_bdevs_operational": 3, 00:24:24.666 "base_bdevs_list": [ 00:24:24.666 { 00:24:24.666 "name": "spare", 00:24:24.666 "uuid": "85240dd6-b27f-556e-b86f-731b6ff348a2", 00:24:24.666 "is_configured": true, 00:24:24.666 "data_offset": 0, 00:24:24.666 "data_size": 65536 00:24:24.666 }, 00:24:24.666 { 00:24:24.666 "name": "BaseBdev2", 00:24:24.666 "uuid": "e133b9fb-36bf-4cf0-b92f-5bfce8cda107", 00:24:24.666 "is_configured": true, 00:24:24.666 "data_offset": 0, 00:24:24.666 "data_size": 65536 00:24:24.666 }, 00:24:24.666 { 00:24:24.666 "name": "BaseBdev3", 00:24:24.666 "uuid": "6556b6a1-4722-4b27-ab21-5dda69ccb131", 00:24:24.666 "is_configured": true, 00:24:24.666 "data_offset": 0, 00:24:24.666 "data_size": 65536 00:24:24.666 } 00:24:24.666 ] 00:24:24.666 }' 00:24:24.666 10:38:18 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:24.666 10:38:18 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:24.666 10:38:18 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:24.666 10:38:18 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:24.666 10:38:18 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:24.666 10:38:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:24.666 10:38:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:24.666 10:38:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:24.666 10:38:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:24.666 10:38:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:24.666 10:38:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:24.666 10:38:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:24.666 10:38:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:24.666 10:38:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:24.666 10:38:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:24.666 10:38:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:24.925 10:38:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:24.925 "name": "raid_bdev1", 00:24:24.925 "uuid": "95244ea5-1e71-4ee6-9577-16015b200400", 00:24:24.925 "strip_size_kb": 64, 00:24:24.925 "state": "online", 00:24:24.925 "raid_level": "raid5f", 00:24:24.925 "superblock": false, 00:24:24.925 "num_base_bdevs": 3, 00:24:24.925 "num_base_bdevs_discovered": 3, 00:24:24.925 "num_base_bdevs_operational": 3, 00:24:24.925 "base_bdevs_list": [ 00:24:24.925 { 00:24:24.925 "name": "spare", 00:24:24.925 "uuid": "85240dd6-b27f-556e-b86f-731b6ff348a2", 00:24:24.925 "is_configured": true, 00:24:24.925 "data_offset": 0, 00:24:24.925 "data_size": 65536 00:24:24.925 }, 00:24:24.925 { 
00:24:24.925 "name": "BaseBdev2", 00:24:24.925 "uuid": "e133b9fb-36bf-4cf0-b92f-5bfce8cda107", 00:24:24.925 "is_configured": true, 00:24:24.925 "data_offset": 0, 00:24:24.925 "data_size": 65536 00:24:24.925 }, 00:24:24.925 { 00:24:24.925 "name": "BaseBdev3", 00:24:24.925 "uuid": "6556b6a1-4722-4b27-ab21-5dda69ccb131", 00:24:24.925 "is_configured": true, 00:24:24.925 "data_offset": 0, 00:24:24.925 "data_size": 65536 00:24:24.925 } 00:24:24.925 ] 00:24:24.925 }' 00:24:24.925 10:38:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:24.925 10:38:18 -- common/autotest_common.sh@10 -- # set +x 00:24:25.861 10:38:19 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:25.861 [2024-07-12 10:38:19.624535] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:25.861 [2024-07-12 10:38:19.624679] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:25.861 [2024-07-12 10:38:19.624868] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:25.861 [2024-07-12 10:38:19.625084] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:25.861 [2024-07-12 10:38:19.625191] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:24:25.861 10:38:19 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:25.861 10:38:19 -- bdev/bdev_raid.sh@671 -- # jq length 00:24:26.119 10:38:19 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:24:26.119 10:38:19 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:24:26.119 10:38:19 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:24:26.119 10:38:19 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:26.119 10:38:19 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:24:26.119 10:38:19 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:26.119 10:38:19 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:24:26.119 10:38:19 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:26.119 10:38:19 -- bdev/nbd_common.sh@12 -- # local i 00:24:26.119 10:38:19 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:26.119 10:38:19 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:26.120 10:38:19 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:24:26.378 /dev/nbd0 00:24:26.378 10:38:20 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:26.378 10:38:20 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:26.378 10:38:20 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:24:26.378 10:38:20 -- common/autotest_common.sh@857 -- # local i 00:24:26.378 10:38:20 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:24:26.378 10:38:20 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:24:26.378 10:38:20 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:24:26.378 10:38:20 -- common/autotest_common.sh@861 -- # break 00:24:26.378 10:38:20 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:24:26.378 10:38:20 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:24:26.378 10:38:20 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:26.378 1+0 records in 
00:24:26.378 1+0 records out 00:24:26.378 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000255551 s, 16.0 MB/s 00:24:26.378 10:38:20 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:26.378 10:38:20 -- common/autotest_common.sh@874 -- # size=4096 00:24:26.378 10:38:20 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:26.378 10:38:20 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:24:26.378 10:38:20 -- common/autotest_common.sh@877 -- # return 0 00:24:26.378 10:38:20 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:26.378 10:38:20 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:26.378 10:38:20 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:24:26.637 /dev/nbd1 00:24:26.637 10:38:20 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:26.637 10:38:20 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:26.637 10:38:20 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:24:26.637 10:38:20 -- common/autotest_common.sh@857 -- # local i 00:24:26.637 10:38:20 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:24:26.637 10:38:20 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:24:26.637 10:38:20 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:24:26.637 10:38:20 -- common/autotest_common.sh@861 -- # break 00:24:26.637 10:38:20 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:24:26.637 10:38:20 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:24:26.637 10:38:20 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:26.637 1+0 records in 00:24:26.637 1+0 records out 00:24:26.637 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000334986 s, 12.2 MB/s 00:24:26.637 10:38:20 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:26.637 10:38:20 -- common/autotest_common.sh@874 -- # size=4096 00:24:26.637 10:38:20 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:26.637 10:38:20 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:24:26.637 10:38:20 -- common/autotest_common.sh@877 -- # return 0 00:24:26.637 10:38:20 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:26.637 10:38:20 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:26.637 10:38:20 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:24:26.896 10:38:20 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:24:26.896 10:38:20 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:26.896 10:38:20 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:24:26.896 10:38:20 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:26.896 10:38:20 -- bdev/nbd_common.sh@51 -- # local i 00:24:26.896 10:38:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:26.896 10:38:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:26.896 10:38:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:26.896 10:38:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:26.896 10:38:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:26.896 10:38:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:26.896 10:38:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:26.896 10:38:20 -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:26.896 10:38:20 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:24:27.155 10:38:20 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:24:27.155 10:38:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:27.155 10:38:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:27.155 10:38:20 -- bdev/nbd_common.sh@41 -- # break 00:24:27.156 10:38:20 -- bdev/nbd_common.sh@45 -- # return 0 00:24:27.156 10:38:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:27.156 10:38:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:24:27.156 10:38:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:27.156 10:38:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:27.156 10:38:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:27.156 10:38:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:27.156 10:38:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:27.156 10:38:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:27.415 10:38:21 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:24:27.415 10:38:21 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:24:27.415 10:38:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:27.415 10:38:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:27.415 10:38:21 -- bdev/nbd_common.sh@41 -- # break 00:24:27.415 10:38:21 -- bdev/nbd_common.sh@45 -- # return 0 00:24:27.415 10:38:21 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:24:27.415 10:38:21 -- bdev/bdev_raid.sh@709 -- # killprocess 132050 00:24:27.415 10:38:21 -- common/autotest_common.sh@926 -- # '[' -z 132050 ']' 00:24:27.415 10:38:21 -- common/autotest_common.sh@930 -- # kill -0 132050 00:24:27.415 10:38:21 -- common/autotest_common.sh@931 -- # uname 00:24:27.415 10:38:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:27.415 10:38:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 132050 00:24:27.415 killing process with pid 132050 00:24:27.415 Received shutdown signal, test time was about 60.000000 seconds 00:24:27.415 00:24:27.415 Latency(us) 00:24:27.415 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:27.415 =================================================================================================================== 00:24:27.415 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:27.415 10:38:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:27.415 10:38:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:27.415 10:38:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 132050' 00:24:27.415 10:38:21 -- common/autotest_common.sh@945 -- # kill 132050 00:24:27.415 10:38:21 -- common/autotest_common.sh@950 -- # wait 132050 00:24:27.415 [2024-07-12 10:38:21.196431] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:27.674 [2024-07-12 10:38:21.455571] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:28.611 ************************************ 00:24:28.611 END TEST raid5f_rebuild_test 00:24:28.611 ************************************ 00:24:28.611 10:38:22 -- bdev/bdev_raid.sh@711 -- # return 0 00:24:28.611 00:24:28.611 real 0m20.720s 00:24:28.611 user 0m31.285s 00:24:28.611 sys 0m2.144s 00:24:28.611 10:38:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:28.611 10:38:22 -- common/autotest_common.sh@10 -- # set +x 
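For readers following the trace: the rebuild verification in the test above reduces to polling the raid bdev over the RPC socket and checking the background-process fields with jq. A minimal sketch of that idiom, assuming the bdevperf RPC socket at /var/tmp/spdk-raid.sock; the rpc.py and jq invocations are taken verbatim from the trace, while the helper name and polling loop here are illustrative, not part of the test script:

  # Illustrative helper; mirrors verify_raid_bdev_process in bdev_raid.sh.
  rebuild_in_progress() {
      local info
      info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
              bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
      # The script checks both process fields against rebuild/spare.
      [[ $(jq -r '.process.type // "none"' <<< "$info") == rebuild ]] &&
          [[ $(jq -r '.process.target // "none"' <<< "$info") == spare ]]
  }
  # The script bounds this loop with (( SECONDS < timeout )).
  while rebuild_in_progress; do sleep 1; done

Once the process fields report none/none the check fails, the loop breaks at bdev_raid.sh@660, and the script re-verifies the final bdev state, which is the none == none branch seen in the trace above.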
00:24:28.611 10:38:22 -- bdev/bdev_raid.sh@749 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false 00:24:28.611 10:38:22 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:24:28.611 10:38:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:28.611 10:38:22 -- common/autotest_common.sh@10 -- # set +x 00:24:28.611 ************************************ 00:24:28.611 START TEST raid5f_rebuild_test_sb 00:24:28.611 ************************************ 00:24:28.611 10:38:22 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid5f 3 true false 00:24:28.611 10:38:22 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:24:28.611 10:38:22 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=3 00:24:28.611 10:38:22 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:24:28.611 10:38:22 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:24:28.611 10:38:22 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:24:28.870 10:38:22 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:24:28.870 10:38:22 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:28.870 10:38:22 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:24:28.870 10:38:22 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:28.870 10:38:22 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:28.870 10:38:22 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:24:28.870 10:38:22 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:28.870 10:38:22 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:28.870 10:38:22 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:24:28.870 10:38:22 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:28.870 10:38:22 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:28.870 10:38:22 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:24:28.870 10:38:22 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:24:28.870 10:38:22 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:24:28.870 10:38:22 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:24:28.870 10:38:22 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:24:28.870 10:38:22 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:24:28.870 10:38:22 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:24:28.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:28.870 10:38:22 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:24:28.870 10:38:22 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:24:28.870 10:38:22 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:24:28.870 10:38:22 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:24:28.870 10:38:22 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:24:28.870 10:38:22 -- bdev/bdev_raid.sh@544 -- # raid_pid=132635 00:24:28.870 10:38:22 -- bdev/bdev_raid.sh@545 -- # waitforlisten 132635 /var/tmp/spdk-raid.sock 00:24:28.870 10:38:22 -- common/autotest_common.sh@819 -- # '[' -z 132635 ']' 00:24:28.870 10:38:22 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:24:28.870 10:38:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:28.870 10:38:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:28.870 10:38:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
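The create_arg accumulated above (' -z 64' from the strip-size branch, ' -s' from the superblock branch) later expands into the raid creation RPC for this test. For reference, a sketch of that call as it appears further down in this run at bdev_raid.sh@563; only the trailing comments are added here:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_raid_create -z 64 -s -r raid5f \
      -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1
  # -z 64 : strip size in KiB (set because raid5f != raid1)
  # -s    : store a superblock on each base bdev; this is why data_offset
  #         later reports 2048 blocks instead of the 0 seen in the
  #         no-superblock raid5f_rebuild_test above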
00:24:28.870 10:38:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:28.870 10:38:22 -- common/autotest_common.sh@10 -- # set +x 00:24:28.870 [2024-07-12 10:38:22.589722] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:24:28.870 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:28.870 Zero copy mechanism will not be used. 00:24:28.870 [2024-07-12 10:38:22.589906] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132635 ] 00:24:28.870 [2024-07-12 10:38:22.754949] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:29.130 [2024-07-12 10:38:22.931933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:29.388 [2024-07-12 10:38:23.117300] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:29.646 10:38:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:29.646 10:38:23 -- common/autotest_common.sh@852 -- # return 0 00:24:29.646 10:38:23 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:29.646 10:38:23 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:24:29.646 10:38:23 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:29.904 BaseBdev1_malloc 00:24:29.904 10:38:23 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:30.162 [2024-07-12 10:38:23.910686] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:30.162 [2024-07-12 10:38:23.910788] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:30.162 [2024-07-12 10:38:23.910824] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:24:30.162 [2024-07-12 10:38:23.910867] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:30.162 [2024-07-12 10:38:23.913075] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:30.162 [2024-07-12 10:38:23.913121] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:30.162 BaseBdev1 00:24:30.162 10:38:23 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:30.162 10:38:23 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:24:30.162 10:38:23 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:30.421 BaseBdev2_malloc 00:24:30.421 10:38:24 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:30.421 [2024-07-12 10:38:24.322730] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:30.421 [2024-07-12 10:38:24.322791] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:30.421 [2024-07-12 10:38:24.322828] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:24:30.421 [2024-07-12 10:38:24.322877] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:30.421 [2024-07-12 10:38:24.324985] vbdev_passthru.c: 704:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:24:30.421 [2024-07-12 10:38:24.325040] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:30.421 BaseBdev2 00:24:30.680 10:38:24 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:30.680 10:38:24 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:24:30.680 10:38:24 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:24:30.680 BaseBdev3_malloc 00:24:30.680 10:38:24 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:24:30.938 [2024-07-12 10:38:24.719250] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:24:30.938 [2024-07-12 10:38:24.719311] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:30.938 [2024-07-12 10:38:24.719356] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:24:30.938 [2024-07-12 10:38:24.719398] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:30.938 [2024-07-12 10:38:24.721552] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:30.938 [2024-07-12 10:38:24.721601] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:30.938 BaseBdev3 00:24:30.939 10:38:24 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:24:31.197 spare_malloc 00:24:31.197 10:38:24 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:24:31.456 spare_delay 00:24:31.456 10:38:25 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:31.456 [2024-07-12 10:38:25.347972] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:31.456 [2024-07-12 10:38:25.348045] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:31.456 [2024-07-12 10:38:25.348076] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:24:31.456 [2024-07-12 10:38:25.348114] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:31.456 [2024-07-12 10:38:25.351046] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:31.456 [2024-07-12 10:38:25.351101] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:31.456 spare 00:24:31.456 10:38:25 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:24:31.715 [2024-07-12 10:38:25.528087] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:31.715 [2024-07-12 10:38:25.529923] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:31.715 [2024-07-12 10:38:25.529993] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:31.715 [2024-07-12 10:38:25.530178] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:24:31.715 [2024-07-12 
10:38:25.530191] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:24:31.715 [2024-07-12 10:38:25.530308] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:24:31.715 [2024-07-12 10:38:25.534501] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:24:31.715 [2024-07-12 10:38:25.534524] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:24:31.715 [2024-07-12 10:38:25.534668] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:31.715 10:38:25 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:31.715 10:38:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:31.715 10:38:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:31.715 10:38:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:31.715 10:38:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:31.715 10:38:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:31.715 10:38:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:31.715 10:38:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:31.715 10:38:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:31.715 10:38:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:31.715 10:38:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:31.715 10:38:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:31.974 10:38:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:31.974 "name": "raid_bdev1", 00:24:31.974 "uuid": "0e1c8640-056e-4903-a45b-26994055d1dc", 00:24:31.974 "strip_size_kb": 64, 00:24:31.974 "state": "online", 00:24:31.974 "raid_level": "raid5f", 00:24:31.974 "superblock": true, 00:24:31.974 "num_base_bdevs": 3, 00:24:31.974 "num_base_bdevs_discovered": 3, 00:24:31.974 "num_base_bdevs_operational": 3, 00:24:31.974 "base_bdevs_list": [ 00:24:31.974 { 00:24:31.974 "name": "BaseBdev1", 00:24:31.974 "uuid": "e95a4120-00b2-5a2e-bf22-aaa4d6a4c19b", 00:24:31.974 "is_configured": true, 00:24:31.974 "data_offset": 2048, 00:24:31.974 "data_size": 63488 00:24:31.974 }, 00:24:31.974 { 00:24:31.974 "name": "BaseBdev2", 00:24:31.974 "uuid": "4e8311d3-239b-5e28-99ed-5b984362abdf", 00:24:31.974 "is_configured": true, 00:24:31.974 "data_offset": 2048, 00:24:31.974 "data_size": 63488 00:24:31.974 }, 00:24:31.974 { 00:24:31.974 "name": "BaseBdev3", 00:24:31.974 "uuid": "0e95c764-baea-551f-8a4d-5a9deaf91976", 00:24:31.974 "is_configured": true, 00:24:31.974 "data_offset": 2048, 00:24:31.974 "data_size": 63488 00:24:31.974 } 00:24:31.974 ] 00:24:31.974 }' 00:24:31.974 10:38:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:31.974 10:38:25 -- common/autotest_common.sh@10 -- # set +x 00:24:32.541 10:38:26 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:32.541 10:38:26 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:24:32.800 [2024-07-12 10:38:26.583777] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:32.800 10:38:26 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=126976 00:24:32.800 10:38:26 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:32.800 
10:38:26 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:33.058 10:38:26 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:24:33.058 10:38:26 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:24:33.058 10:38:26 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:24:33.058 10:38:26 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:24:33.058 10:38:26 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:33.058 10:38:26 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:24:33.058 10:38:26 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:33.059 10:38:26 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:24:33.059 10:38:26 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:33.059 10:38:26 -- bdev/nbd_common.sh@12 -- # local i 00:24:33.059 10:38:26 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:33.059 10:38:26 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:33.059 10:38:26 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:24:33.317 [2024-07-12 10:38:27.011768] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:24:33.317 /dev/nbd0 00:24:33.317 10:38:27 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:33.317 10:38:27 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:33.317 10:38:27 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:24:33.317 10:38:27 -- common/autotest_common.sh@857 -- # local i 00:24:33.317 10:38:27 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:24:33.317 10:38:27 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:24:33.317 10:38:27 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:24:33.317 10:38:27 -- common/autotest_common.sh@861 -- # break 00:24:33.317 10:38:27 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:24:33.317 10:38:27 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:24:33.317 10:38:27 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:33.317 1+0 records in 00:24:33.317 1+0 records out 00:24:33.317 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000300233 s, 13.6 MB/s 00:24:33.317 10:38:27 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:33.317 10:38:27 -- common/autotest_common.sh@874 -- # size=4096 00:24:33.317 10:38:27 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:33.317 10:38:27 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:24:33.317 10:38:27 -- common/autotest_common.sh@877 -- # return 0 00:24:33.317 10:38:27 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:33.317 10:38:27 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:33.317 10:38:27 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:24:33.317 10:38:27 -- bdev/bdev_raid.sh@581 -- # write_unit_size=256 00:24:33.317 10:38:27 -- bdev/bdev_raid.sh@582 -- # echo 128 00:24:33.317 10:38:27 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:24:33.576 496+0 records in 00:24:33.577 496+0 records out 00:24:33.577 65011712 bytes (65 MB, 62 MiB) copied, 0.344642 s, 189 MB/s 00:24:33.577 10:38:27 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:24:33.577 10:38:27 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:33.577 10:38:27 -- 
bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:24:33.577 10:38:27 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:33.577 10:38:27 -- bdev/nbd_common.sh@51 -- # local i 00:24:33.577 10:38:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:33.577 10:38:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:33.836 10:38:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:33.836 10:38:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:33.836 10:38:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:33.836 10:38:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:33.836 10:38:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:33.836 10:38:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:33.836 [2024-07-12 10:38:27.626957] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:33.836 10:38:27 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:24:33.836 10:38:27 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:24:33.836 10:38:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:33.836 10:38:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:33.836 10:38:27 -- bdev/nbd_common.sh@41 -- # break 00:24:33.836 10:38:27 -- bdev/nbd_common.sh@45 -- # return 0 00:24:33.836 10:38:27 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:24:34.094 [2024-07-12 10:38:27.964132] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:34.094 10:38:27 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:24:34.094 10:38:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:34.094 10:38:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:34.094 10:38:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:34.094 10:38:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:34.094 10:38:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:34.095 10:38:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:34.095 10:38:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:34.095 10:38:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:34.095 10:38:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:34.095 10:38:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:34.095 10:38:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:34.353 10:38:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:34.353 "name": "raid_bdev1", 00:24:34.353 "uuid": "0e1c8640-056e-4903-a45b-26994055d1dc", 00:24:34.353 "strip_size_kb": 64, 00:24:34.353 "state": "online", 00:24:34.353 "raid_level": "raid5f", 00:24:34.353 "superblock": true, 00:24:34.353 "num_base_bdevs": 3, 00:24:34.353 "num_base_bdevs_discovered": 2, 00:24:34.353 "num_base_bdevs_operational": 2, 00:24:34.353 "base_bdevs_list": [ 00:24:34.353 { 00:24:34.353 "name": null, 00:24:34.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:34.353 "is_configured": false, 00:24:34.353 "data_offset": 2048, 00:24:34.353 "data_size": 63488 00:24:34.353 }, 00:24:34.353 { 00:24:34.353 "name": "BaseBdev2", 00:24:34.353 "uuid": "4e8311d3-239b-5e28-99ed-5b984362abdf", 00:24:34.353 "is_configured": true, 00:24:34.353 "data_offset": 2048, 00:24:34.353 "data_size": 63488 00:24:34.353 }, 
00:24:34.353 { 00:24:34.353 "name": "BaseBdev3", 00:24:34.353 "uuid": "0e95c764-baea-551f-8a4d-5a9deaf91976", 00:24:34.353 "is_configured": true, 00:24:34.353 "data_offset": 2048, 00:24:34.353 "data_size": 63488 00:24:34.353 } 00:24:34.353 ] 00:24:34.353 }' 00:24:34.353 10:38:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:34.353 10:38:28 -- common/autotest_common.sh@10 -- # set +x 00:24:34.920 10:38:28 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:35.179 [2024-07-12 10:38:29.056333] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:24:35.179 [2024-07-12 10:38:29.056377] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:35.179 [2024-07-12 10:38:29.067687] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002a710 00:24:35.179 [2024-07-12 10:38:29.073384] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:35.179 10:38:29 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:24:36.555 10:38:30 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:36.555 10:38:30 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:36.555 10:38:30 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:36.555 10:38:30 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:36.555 10:38:30 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:36.555 10:38:30 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:36.555 10:38:30 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:36.555 10:38:30 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:36.555 "name": "raid_bdev1", 00:24:36.555 "uuid": "0e1c8640-056e-4903-a45b-26994055d1dc", 00:24:36.555 "strip_size_kb": 64, 00:24:36.555 "state": "online", 00:24:36.555 "raid_level": "raid5f", 00:24:36.555 "superblock": true, 00:24:36.555 "num_base_bdevs": 3, 00:24:36.556 "num_base_bdevs_discovered": 3, 00:24:36.556 "num_base_bdevs_operational": 3, 00:24:36.556 "process": { 00:24:36.556 "type": "rebuild", 00:24:36.556 "target": "spare", 00:24:36.556 "progress": { 00:24:36.556 "blocks": 22528, 00:24:36.556 "percent": 17 00:24:36.556 } 00:24:36.556 }, 00:24:36.556 "base_bdevs_list": [ 00:24:36.556 { 00:24:36.556 "name": "spare", 00:24:36.556 "uuid": "d196e521-6029-5d77-9366-cd1c17c38af0", 00:24:36.556 "is_configured": true, 00:24:36.556 "data_offset": 2048, 00:24:36.556 "data_size": 63488 00:24:36.556 }, 00:24:36.556 { 00:24:36.556 "name": "BaseBdev2", 00:24:36.556 "uuid": "4e8311d3-239b-5e28-99ed-5b984362abdf", 00:24:36.556 "is_configured": true, 00:24:36.556 "data_offset": 2048, 00:24:36.556 "data_size": 63488 00:24:36.556 }, 00:24:36.556 { 00:24:36.556 "name": "BaseBdev3", 00:24:36.556 "uuid": "0e95c764-baea-551f-8a4d-5a9deaf91976", 00:24:36.556 "is_configured": true, 00:24:36.556 "data_offset": 2048, 00:24:36.556 "data_size": 63488 00:24:36.556 } 00:24:36.556 ] 00:24:36.556 }' 00:24:36.556 10:38:30 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:36.556 10:38:30 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:36.556 10:38:30 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:36.556 10:38:30 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:36.556 10:38:30 -- bdev/bdev_raid.sh@604 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:24:36.815 [2024-07-12 10:38:30.594450] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:36.815 [2024-07-12 10:38:30.687355] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:36.815 [2024-07-12 10:38:30.687413] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:36.815 10:38:30 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:24:36.815 10:38:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:36.815 10:38:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:36.815 10:38:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:36.815 10:38:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:36.815 10:38:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:36.815 10:38:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:36.815 10:38:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:36.815 10:38:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:36.815 10:38:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:36.815 10:38:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:36.815 10:38:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:37.073 10:38:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:37.073 "name": "raid_bdev1", 00:24:37.073 "uuid": "0e1c8640-056e-4903-a45b-26994055d1dc", 00:24:37.073 "strip_size_kb": 64, 00:24:37.073 "state": "online", 00:24:37.073 "raid_level": "raid5f", 00:24:37.073 "superblock": true, 00:24:37.073 "num_base_bdevs": 3, 00:24:37.073 "num_base_bdevs_discovered": 2, 00:24:37.073 "num_base_bdevs_operational": 2, 00:24:37.073 "base_bdevs_list": [ 00:24:37.073 { 00:24:37.073 "name": null, 00:24:37.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:37.073 "is_configured": false, 00:24:37.073 "data_offset": 2048, 00:24:37.073 "data_size": 63488 00:24:37.073 }, 00:24:37.073 { 00:24:37.073 "name": "BaseBdev2", 00:24:37.073 "uuid": "4e8311d3-239b-5e28-99ed-5b984362abdf", 00:24:37.073 "is_configured": true, 00:24:37.073 "data_offset": 2048, 00:24:37.073 "data_size": 63488 00:24:37.073 }, 00:24:37.073 { 00:24:37.073 "name": "BaseBdev3", 00:24:37.073 "uuid": "0e95c764-baea-551f-8a4d-5a9deaf91976", 00:24:37.073 "is_configured": true, 00:24:37.073 "data_offset": 2048, 00:24:37.073 "data_size": 63488 00:24:37.073 } 00:24:37.073 ] 00:24:37.073 }' 00:24:37.073 10:38:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:37.073 10:38:30 -- common/autotest_common.sh@10 -- # set +x 00:24:38.010 10:38:31 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:38.010 10:38:31 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:38.010 10:38:31 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:38.010 10:38:31 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:38.010 10:38:31 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:38.010 10:38:31 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:38.010 10:38:31 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:38.010 10:38:31 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:38.010 "name": 
"raid_bdev1", 00:24:38.010 "uuid": "0e1c8640-056e-4903-a45b-26994055d1dc", 00:24:38.010 "strip_size_kb": 64, 00:24:38.010 "state": "online", 00:24:38.010 "raid_level": "raid5f", 00:24:38.010 "superblock": true, 00:24:38.010 "num_base_bdevs": 3, 00:24:38.010 "num_base_bdevs_discovered": 2, 00:24:38.010 "num_base_bdevs_operational": 2, 00:24:38.010 "base_bdevs_list": [ 00:24:38.010 { 00:24:38.010 "name": null, 00:24:38.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:38.010 "is_configured": false, 00:24:38.010 "data_offset": 2048, 00:24:38.010 "data_size": 63488 00:24:38.010 }, 00:24:38.010 { 00:24:38.010 "name": "BaseBdev2", 00:24:38.010 "uuid": "4e8311d3-239b-5e28-99ed-5b984362abdf", 00:24:38.010 "is_configured": true, 00:24:38.010 "data_offset": 2048, 00:24:38.010 "data_size": 63488 00:24:38.010 }, 00:24:38.010 { 00:24:38.010 "name": "BaseBdev3", 00:24:38.010 "uuid": "0e95c764-baea-551f-8a4d-5a9deaf91976", 00:24:38.010 "is_configured": true, 00:24:38.010 "data_offset": 2048, 00:24:38.010 "data_size": 63488 00:24:38.010 } 00:24:38.010 ] 00:24:38.010 }' 00:24:38.010 10:38:31 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:38.010 10:38:31 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:38.010 10:38:31 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:38.010 10:38:31 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:38.010 10:38:31 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:38.268 [2024-07-12 10:38:32.108841] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:24:38.268 [2024-07-12 10:38:32.108875] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:38.268 [2024-07-12 10:38:32.118326] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002a8b0 00:24:38.268 [2024-07-12 10:38:32.123947] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:38.268 10:38:32 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:24:39.699 10:38:33 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:39.699 10:38:33 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:39.699 10:38:33 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:39.699 10:38:33 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:39.699 10:38:33 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:39.699 10:38:33 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:39.699 10:38:33 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:39.699 10:38:33 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:39.699 "name": "raid_bdev1", 00:24:39.699 "uuid": "0e1c8640-056e-4903-a45b-26994055d1dc", 00:24:39.699 "strip_size_kb": 64, 00:24:39.699 "state": "online", 00:24:39.699 "raid_level": "raid5f", 00:24:39.699 "superblock": true, 00:24:39.699 "num_base_bdevs": 3, 00:24:39.699 "num_base_bdevs_discovered": 3, 00:24:39.699 "num_base_bdevs_operational": 3, 00:24:39.699 "process": { 00:24:39.699 "type": "rebuild", 00:24:39.699 "target": "spare", 00:24:39.699 "progress": { 00:24:39.699 "blocks": 24576, 00:24:39.699 "percent": 19 00:24:39.699 } 00:24:39.699 }, 00:24:39.699 "base_bdevs_list": [ 00:24:39.699 { 00:24:39.699 "name": "spare", 00:24:39.699 "uuid": "d196e521-6029-5d77-9366-cd1c17c38af0", 
00:24:39.699 "is_configured": true, 00:24:39.699 "data_offset": 2048, 00:24:39.699 "data_size": 63488 00:24:39.699 }, 00:24:39.699 { 00:24:39.699 "name": "BaseBdev2", 00:24:39.699 "uuid": "4e8311d3-239b-5e28-99ed-5b984362abdf", 00:24:39.699 "is_configured": true, 00:24:39.699 "data_offset": 2048, 00:24:39.699 "data_size": 63488 00:24:39.699 }, 00:24:39.699 { 00:24:39.699 "name": "BaseBdev3", 00:24:39.699 "uuid": "0e95c764-baea-551f-8a4d-5a9deaf91976", 00:24:39.699 "is_configured": true, 00:24:39.699 "data_offset": 2048, 00:24:39.699 "data_size": 63488 00:24:39.699 } 00:24:39.699 ] 00:24:39.699 }' 00:24:39.699 10:38:33 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:39.699 10:38:33 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:39.699 10:38:33 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:39.699 10:38:33 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:39.699 10:38:33 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:24:39.699 10:38:33 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:24:39.699 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:24:39.699 10:38:33 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=3 00:24:39.699 10:38:33 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:24:39.699 10:38:33 -- bdev/bdev_raid.sh@657 -- # local timeout=626 00:24:39.699 10:38:33 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:39.699 10:38:33 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:39.699 10:38:33 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:39.699 10:38:33 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:39.699 10:38:33 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:39.699 10:38:33 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:39.699 10:38:33 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:39.699 10:38:33 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:39.965 10:38:33 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:39.965 "name": "raid_bdev1", 00:24:39.965 "uuid": "0e1c8640-056e-4903-a45b-26994055d1dc", 00:24:39.965 "strip_size_kb": 64, 00:24:39.965 "state": "online", 00:24:39.965 "raid_level": "raid5f", 00:24:39.965 "superblock": true, 00:24:39.965 "num_base_bdevs": 3, 00:24:39.965 "num_base_bdevs_discovered": 3, 00:24:39.965 "num_base_bdevs_operational": 3, 00:24:39.965 "process": { 00:24:39.965 "type": "rebuild", 00:24:39.965 "target": "spare", 00:24:39.965 "progress": { 00:24:39.965 "blocks": 30720, 00:24:39.965 "percent": 24 00:24:39.965 } 00:24:39.965 }, 00:24:39.965 "base_bdevs_list": [ 00:24:39.965 { 00:24:39.965 "name": "spare", 00:24:39.965 "uuid": "d196e521-6029-5d77-9366-cd1c17c38af0", 00:24:39.965 "is_configured": true, 00:24:39.965 "data_offset": 2048, 00:24:39.965 "data_size": 63488 00:24:39.965 }, 00:24:39.965 { 00:24:39.965 "name": "BaseBdev2", 00:24:39.965 "uuid": "4e8311d3-239b-5e28-99ed-5b984362abdf", 00:24:39.965 "is_configured": true, 00:24:39.965 "data_offset": 2048, 00:24:39.965 "data_size": 63488 00:24:39.965 }, 00:24:39.965 { 00:24:39.965 "name": "BaseBdev3", 00:24:39.965 "uuid": "0e95c764-baea-551f-8a4d-5a9deaf91976", 00:24:39.965 "is_configured": true, 00:24:39.965 "data_offset": 2048, 00:24:39.965 "data_size": 63488 00:24:39.965 } 00:24:39.965 ] 00:24:39.965 }' 00:24:39.965 10:38:33 -- 
bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:39.965 10:38:33 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:39.965 10:38:33 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:39.965 10:38:33 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:39.965 10:38:33 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:40.901 10:38:34 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:40.901 10:38:34 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:40.901 10:38:34 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:40.901 10:38:34 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:40.901 10:38:34 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:40.901 10:38:34 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:40.901 10:38:34 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:40.901 10:38:34 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:41.160 10:38:35 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:41.160 "name": "raid_bdev1", 00:24:41.160 "uuid": "0e1c8640-056e-4903-a45b-26994055d1dc", 00:24:41.160 "strip_size_kb": 64, 00:24:41.160 "state": "online", 00:24:41.160 "raid_level": "raid5f", 00:24:41.160 "superblock": true, 00:24:41.160 "num_base_bdevs": 3, 00:24:41.160 "num_base_bdevs_discovered": 3, 00:24:41.160 "num_base_bdevs_operational": 3, 00:24:41.160 "process": { 00:24:41.160 "type": "rebuild", 00:24:41.160 "target": "spare", 00:24:41.160 "progress": { 00:24:41.160 "blocks": 57344, 00:24:41.160 "percent": 45 00:24:41.160 } 00:24:41.160 }, 00:24:41.160 "base_bdevs_list": [ 00:24:41.160 { 00:24:41.160 "name": "spare", 00:24:41.160 "uuid": "d196e521-6029-5d77-9366-cd1c17c38af0", 00:24:41.160 "is_configured": true, 00:24:41.160 "data_offset": 2048, 00:24:41.160 "data_size": 63488 00:24:41.160 }, 00:24:41.160 { 00:24:41.160 "name": "BaseBdev2", 00:24:41.160 "uuid": "4e8311d3-239b-5e28-99ed-5b984362abdf", 00:24:41.160 "is_configured": true, 00:24:41.160 "data_offset": 2048, 00:24:41.160 "data_size": 63488 00:24:41.160 }, 00:24:41.160 { 00:24:41.160 "name": "BaseBdev3", 00:24:41.160 "uuid": "0e95c764-baea-551f-8a4d-5a9deaf91976", 00:24:41.160 "is_configured": true, 00:24:41.160 "data_offset": 2048, 00:24:41.160 "data_size": 63488 00:24:41.160 } 00:24:41.160 ] 00:24:41.160 }' 00:24:41.160 10:38:35 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:41.160 10:38:35 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:41.160 10:38:35 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:41.418 10:38:35 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:41.418 10:38:35 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:42.351 10:38:36 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:42.352 10:38:36 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:42.352 10:38:36 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:42.352 10:38:36 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:42.352 10:38:36 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:42.352 10:38:36 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:42.352 10:38:36 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:42.352 10:38:36 -- bdev/bdev_raid.sh@188 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:24:42.610 10:38:36 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:42.610 "name": "raid_bdev1", 00:24:42.610 "uuid": "0e1c8640-056e-4903-a45b-26994055d1dc", 00:24:42.610 "strip_size_kb": 64, 00:24:42.610 "state": "online", 00:24:42.610 "raid_level": "raid5f", 00:24:42.610 "superblock": true, 00:24:42.610 "num_base_bdevs": 3, 00:24:42.610 "num_base_bdevs_discovered": 3, 00:24:42.610 "num_base_bdevs_operational": 3, 00:24:42.610 "process": { 00:24:42.610 "type": "rebuild", 00:24:42.610 "target": "spare", 00:24:42.610 "progress": { 00:24:42.610 "blocks": 86016, 00:24:42.610 "percent": 67 00:24:42.610 } 00:24:42.610 }, 00:24:42.610 "base_bdevs_list": [ 00:24:42.610 { 00:24:42.610 "name": "spare", 00:24:42.610 "uuid": "d196e521-6029-5d77-9366-cd1c17c38af0", 00:24:42.610 "is_configured": true, 00:24:42.610 "data_offset": 2048, 00:24:42.610 "data_size": 63488 00:24:42.610 }, 00:24:42.610 { 00:24:42.610 "name": "BaseBdev2", 00:24:42.610 "uuid": "4e8311d3-239b-5e28-99ed-5b984362abdf", 00:24:42.610 "is_configured": true, 00:24:42.610 "data_offset": 2048, 00:24:42.610 "data_size": 63488 00:24:42.610 }, 00:24:42.610 { 00:24:42.610 "name": "BaseBdev3", 00:24:42.610 "uuid": "0e95c764-baea-551f-8a4d-5a9deaf91976", 00:24:42.610 "is_configured": true, 00:24:42.610 "data_offset": 2048, 00:24:42.610 "data_size": 63488 00:24:42.610 } 00:24:42.610 ] 00:24:42.610 }' 00:24:42.610 10:38:36 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:42.610 10:38:36 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:42.610 10:38:36 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:42.610 10:38:36 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:42.610 10:38:36 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:43.987 10:38:37 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:43.987 10:38:37 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:43.987 10:38:37 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:43.987 10:38:37 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:43.987 10:38:37 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:43.987 10:38:37 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:43.987 10:38:37 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:43.987 10:38:37 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:43.987 10:38:37 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:43.987 "name": "raid_bdev1", 00:24:43.987 "uuid": "0e1c8640-056e-4903-a45b-26994055d1dc", 00:24:43.987 "strip_size_kb": 64, 00:24:43.987 "state": "online", 00:24:43.987 "raid_level": "raid5f", 00:24:43.987 "superblock": true, 00:24:43.987 "num_base_bdevs": 3, 00:24:43.987 "num_base_bdevs_discovered": 3, 00:24:43.987 "num_base_bdevs_operational": 3, 00:24:43.987 "process": { 00:24:43.987 "type": "rebuild", 00:24:43.987 "target": "spare", 00:24:43.987 "progress": { 00:24:43.987 "blocks": 112640, 00:24:43.987 "percent": 88 00:24:43.987 } 00:24:43.987 }, 00:24:43.987 "base_bdevs_list": [ 00:24:43.987 { 00:24:43.987 "name": "spare", 00:24:43.987 "uuid": "d196e521-6029-5d77-9366-cd1c17c38af0", 00:24:43.987 "is_configured": true, 00:24:43.987 "data_offset": 2048, 00:24:43.987 "data_size": 63488 00:24:43.987 }, 00:24:43.987 { 00:24:43.987 "name": "BaseBdev2", 00:24:43.987 "uuid": "4e8311d3-239b-5e28-99ed-5b984362abdf", 00:24:43.987 
"is_configured": true, 00:24:43.987 "data_offset": 2048, 00:24:43.987 "data_size": 63488 00:24:43.987 }, 00:24:43.987 { 00:24:43.987 "name": "BaseBdev3", 00:24:43.987 "uuid": "0e95c764-baea-551f-8a4d-5a9deaf91976", 00:24:43.987 "is_configured": true, 00:24:43.987 "data_offset": 2048, 00:24:43.987 "data_size": 63488 00:24:43.987 } 00:24:43.987 ] 00:24:43.987 }' 00:24:43.987 10:38:37 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:43.987 10:38:37 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:43.987 10:38:37 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:43.987 10:38:37 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:43.987 10:38:37 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:44.554 [2024-07-12 10:38:38.377495] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:44.554 [2024-07-12 10:38:38.377582] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:44.554 [2024-07-12 10:38:38.377718] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:45.121 10:38:38 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:45.121 10:38:38 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:45.121 10:38:38 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:45.121 10:38:38 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:45.121 10:38:38 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:45.121 10:38:38 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:45.121 10:38:38 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:45.121 10:38:38 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:45.121 10:38:39 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:45.121 "name": "raid_bdev1", 00:24:45.121 "uuid": "0e1c8640-056e-4903-a45b-26994055d1dc", 00:24:45.121 "strip_size_kb": 64, 00:24:45.121 "state": "online", 00:24:45.121 "raid_level": "raid5f", 00:24:45.121 "superblock": true, 00:24:45.121 "num_base_bdevs": 3, 00:24:45.121 "num_base_bdevs_discovered": 3, 00:24:45.121 "num_base_bdevs_operational": 3, 00:24:45.121 "base_bdevs_list": [ 00:24:45.121 { 00:24:45.121 "name": "spare", 00:24:45.121 "uuid": "d196e521-6029-5d77-9366-cd1c17c38af0", 00:24:45.121 "is_configured": true, 00:24:45.121 "data_offset": 2048, 00:24:45.121 "data_size": 63488 00:24:45.121 }, 00:24:45.121 { 00:24:45.121 "name": "BaseBdev2", 00:24:45.121 "uuid": "4e8311d3-239b-5e28-99ed-5b984362abdf", 00:24:45.121 "is_configured": true, 00:24:45.121 "data_offset": 2048, 00:24:45.122 "data_size": 63488 00:24:45.122 }, 00:24:45.122 { 00:24:45.122 "name": "BaseBdev3", 00:24:45.122 "uuid": "0e95c764-baea-551f-8a4d-5a9deaf91976", 00:24:45.122 "is_configured": true, 00:24:45.122 "data_offset": 2048, 00:24:45.122 "data_size": 63488 00:24:45.122 } 00:24:45.122 ] 00:24:45.122 }' 00:24:45.122 10:38:39 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:45.380 10:38:39 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:45.380 10:38:39 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:45.380 10:38:39 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:24:45.380 10:38:39 -- bdev/bdev_raid.sh@660 -- # break 00:24:45.380 10:38:39 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:45.380 10:38:39 -- 
bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:45.380 10:38:39 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:45.380 10:38:39 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:45.380 10:38:39 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:45.380 10:38:39 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:45.380 10:38:39 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:45.639 10:38:39 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:45.639 "name": "raid_bdev1", 00:24:45.639 "uuid": "0e1c8640-056e-4903-a45b-26994055d1dc", 00:24:45.639 "strip_size_kb": 64, 00:24:45.639 "state": "online", 00:24:45.639 "raid_level": "raid5f", 00:24:45.639 "superblock": true, 00:24:45.639 "num_base_bdevs": 3, 00:24:45.639 "num_base_bdevs_discovered": 3, 00:24:45.639 "num_base_bdevs_operational": 3, 00:24:45.639 "base_bdevs_list": [ 00:24:45.639 { 00:24:45.639 "name": "spare", 00:24:45.639 "uuid": "d196e521-6029-5d77-9366-cd1c17c38af0", 00:24:45.639 "is_configured": true, 00:24:45.639 "data_offset": 2048, 00:24:45.639 "data_size": 63488 00:24:45.639 }, 00:24:45.639 { 00:24:45.639 "name": "BaseBdev2", 00:24:45.639 "uuid": "4e8311d3-239b-5e28-99ed-5b984362abdf", 00:24:45.639 "is_configured": true, 00:24:45.639 "data_offset": 2048, 00:24:45.639 "data_size": 63488 00:24:45.639 }, 00:24:45.639 { 00:24:45.639 "name": "BaseBdev3", 00:24:45.639 "uuid": "0e95c764-baea-551f-8a4d-5a9deaf91976", 00:24:45.639 "is_configured": true, 00:24:45.639 "data_offset": 2048, 00:24:45.639 "data_size": 63488 00:24:45.639 } 00:24:45.639 ] 00:24:45.639 }' 00:24:45.639 10:38:39 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:45.639 10:38:39 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:45.639 10:38:39 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:45.639 10:38:39 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:45.639 10:38:39 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:45.639 10:38:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:45.639 10:38:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:45.639 10:38:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:45.639 10:38:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:45.639 10:38:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:45.639 10:38:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:45.639 10:38:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:45.639 10:38:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:45.639 10:38:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:45.639 10:38:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:45.639 10:38:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:45.898 10:38:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:45.898 "name": "raid_bdev1", 00:24:45.898 "uuid": "0e1c8640-056e-4903-a45b-26994055d1dc", 00:24:45.898 "strip_size_kb": 64, 00:24:45.898 "state": "online", 00:24:45.898 "raid_level": "raid5f", 00:24:45.898 "superblock": true, 00:24:45.898 "num_base_bdevs": 3, 00:24:45.898 "num_base_bdevs_discovered": 3, 00:24:45.898 "num_base_bdevs_operational": 3, 00:24:45.898 "base_bdevs_list": [ 00:24:45.898 { 00:24:45.898 "name": 
"spare", 00:24:45.898 "uuid": "d196e521-6029-5d77-9366-cd1c17c38af0", 00:24:45.898 "is_configured": true, 00:24:45.898 "data_offset": 2048, 00:24:45.898 "data_size": 63488 00:24:45.898 }, 00:24:45.898 { 00:24:45.898 "name": "BaseBdev2", 00:24:45.898 "uuid": "4e8311d3-239b-5e28-99ed-5b984362abdf", 00:24:45.898 "is_configured": true, 00:24:45.898 "data_offset": 2048, 00:24:45.898 "data_size": 63488 00:24:45.898 }, 00:24:45.898 { 00:24:45.898 "name": "BaseBdev3", 00:24:45.898 "uuid": "0e95c764-baea-551f-8a4d-5a9deaf91976", 00:24:45.898 "is_configured": true, 00:24:45.898 "data_offset": 2048, 00:24:45.898 "data_size": 63488 00:24:45.898 } 00:24:45.898 ] 00:24:45.898 }' 00:24:45.898 10:38:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:45.898 10:38:39 -- common/autotest_common.sh@10 -- # set +x 00:24:46.833 10:38:40 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:46.833 [2024-07-12 10:38:40.668841] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:46.833 [2024-07-12 10:38:40.668875] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:46.833 [2024-07-12 10:38:40.668944] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:46.833 [2024-07-12 10:38:40.669021] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:46.833 [2024-07-12 10:38:40.669034] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:24:46.833 10:38:40 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:46.833 10:38:40 -- bdev/bdev_raid.sh@671 -- # jq length 00:24:47.092 10:38:40 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:24:47.092 10:38:40 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:24:47.092 10:38:40 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:24:47.092 10:38:40 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:47.092 10:38:40 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:24:47.092 10:38:40 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:47.092 10:38:40 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:24:47.092 10:38:40 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:47.092 10:38:40 -- bdev/nbd_common.sh@12 -- # local i 00:24:47.092 10:38:40 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:47.092 10:38:40 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:47.092 10:38:40 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:24:47.351 /dev/nbd0 00:24:47.351 10:38:41 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:47.351 10:38:41 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:47.351 10:38:41 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:24:47.351 10:38:41 -- common/autotest_common.sh@857 -- # local i 00:24:47.351 10:38:41 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:24:47.351 10:38:41 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:24:47.351 10:38:41 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:24:47.351 10:38:41 -- common/autotest_common.sh@861 -- # break 00:24:47.351 10:38:41 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:24:47.351 10:38:41 -- 
common/autotest_common.sh@872 -- # (( i <= 20 )) 00:24:47.351 10:38:41 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:47.351 1+0 records in 00:24:47.351 1+0 records out 00:24:47.351 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000337598 s, 12.1 MB/s 00:24:47.351 10:38:41 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:47.351 10:38:41 -- common/autotest_common.sh@874 -- # size=4096 00:24:47.351 10:38:41 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:47.351 10:38:41 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:24:47.351 10:38:41 -- common/autotest_common.sh@877 -- # return 0 00:24:47.351 10:38:41 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:47.351 10:38:41 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:47.351 10:38:41 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:24:47.609 /dev/nbd1 00:24:47.609 10:38:41 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:47.609 10:38:41 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:47.609 10:38:41 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:24:47.609 10:38:41 -- common/autotest_common.sh@857 -- # local i 00:24:47.609 10:38:41 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:24:47.609 10:38:41 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:24:47.609 10:38:41 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:24:47.610 10:38:41 -- common/autotest_common.sh@861 -- # break 00:24:47.610 10:38:41 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:24:47.610 10:38:41 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:24:47.610 10:38:41 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:47.610 1+0 records in 00:24:47.610 1+0 records out 00:24:47.610 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000361663 s, 11.3 MB/s 00:24:47.610 10:38:41 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:47.610 10:38:41 -- common/autotest_common.sh@874 -- # size=4096 00:24:47.610 10:38:41 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:47.610 10:38:41 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:24:47.610 10:38:41 -- common/autotest_common.sh@877 -- # return 0 00:24:47.610 10:38:41 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:47.610 10:38:41 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:47.610 10:38:41 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:24:47.867 10:38:41 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:24:47.867 10:38:41 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:47.867 10:38:41 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:24:47.867 10:38:41 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:47.867 10:38:41 -- bdev/nbd_common.sh@51 -- # local i 00:24:47.867 10:38:41 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:47.867 10:38:41 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:48.126 10:38:41 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:48.126 10:38:41 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 
00:24:48.126 10:38:41 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:48.126 10:38:41 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:48.126 10:38:41 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:48.126 10:38:41 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:48.126 10:38:41 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:24:48.126 10:38:41 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:24:48.126 10:38:41 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:48.126 10:38:41 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:48.126 10:38:41 -- bdev/nbd_common.sh@41 -- # break 00:24:48.126 10:38:41 -- bdev/nbd_common.sh@45 -- # return 0 00:24:48.126 10:38:41 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:48.126 10:38:41 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:24:48.384 10:38:42 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:48.384 10:38:42 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:48.384 10:38:42 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:48.384 10:38:42 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:48.384 10:38:42 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:48.384 10:38:42 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:48.384 10:38:42 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:24:48.641 10:38:42 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:24:48.641 10:38:42 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:48.641 10:38:42 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:48.641 10:38:42 -- bdev/nbd_common.sh@41 -- # break 00:24:48.641 10:38:42 -- bdev/nbd_common.sh@45 -- # return 0 00:24:48.641 10:38:42 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:24:48.641 10:38:42 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:24:48.641 10:38:42 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:24:48.641 10:38:42 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:24:48.641 10:38:42 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:48.897 [2024-07-12 10:38:42.744499] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:48.897 [2024-07-12 10:38:42.744594] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:48.897 [2024-07-12 10:38:42.744628] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:24:48.897 [2024-07-12 10:38:42.744653] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:48.897 [2024-07-12 10:38:42.746669] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:48.897 [2024-07-12 10:38:42.746733] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:48.897 [2024-07-12 10:38:42.746827] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:24:48.897 [2024-07-12 10:38:42.746892] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:48.897 BaseBdev1 00:24:48.897 10:38:42 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:24:48.897 10:38:42 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:24:48.897 10:38:42 -- bdev/bdev_raid.sh@698 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:24:49.155 10:38:42 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:49.412 [2024-07-12 10:38:43.096532] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:49.412 [2024-07-12 10:38:43.096584] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:49.412 [2024-07-12 10:38:43.096618] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:24:49.412 [2024-07-12 10:38:43.096636] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:49.412 [2024-07-12 10:38:43.097021] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:49.412 [2024-07-12 10:38:43.097072] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:49.412 [2024-07-12 10:38:43.097156] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:24:49.412 [2024-07-12 10:38:43.097170] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:24:49.412 [2024-07-12 10:38:43.097177] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:49.412 [2024-07-12 10:38:43.097193] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000b180 name raid_bdev1, state configuring 00:24:49.412 [2024-07-12 10:38:43.097258] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:49.412 BaseBdev2 00:24:49.412 10:38:43 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:24:49.412 10:38:43 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:24:49.412 10:38:43 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:24:49.412 10:38:43 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:24:49.668 [2024-07-12 10:38:43.464587] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:24:49.668 [2024-07-12 10:38:43.464644] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:49.668 [2024-07-12 10:38:43.464679] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:24:49.668 [2024-07-12 10:38:43.464697] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:49.668 [2024-07-12 10:38:43.465052] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:49.668 [2024-07-12 10:38:43.465105] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:49.668 [2024-07-12 10:38:43.465178] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:24:49.668 [2024-07-12 10:38:43.465200] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:49.668 BaseBdev3 00:24:49.669 10:38:43 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:24:49.926 10:38:43 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_create -b spare_delay -p spare 00:24:49.926 [2024-07-12 10:38:43.836294] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:49.926 [2024-07-12 10:38:43.836363] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:49.926 [2024-07-12 10:38:43.836396] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:24:49.926 [2024-07-12 10:38:43.836423] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:49.926 [2024-07-12 10:38:43.836962] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:49.926 [2024-07-12 10:38:43.837016] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:49.926 [2024-07-12 10:38:43.837107] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:24:49.926 [2024-07-12 10:38:43.837148] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:50.182 spare 00:24:50.182 10:38:43 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:50.182 10:38:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:50.182 10:38:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:50.182 10:38:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:50.182 10:38:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:50.183 10:38:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:50.183 10:38:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:50.183 10:38:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:50.183 10:38:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:50.183 10:38:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:50.183 10:38:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:50.183 10:38:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:50.183 [2024-07-12 10:38:43.937260] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000b780 00:24:50.183 [2024-07-12 10:38:43.937278] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:24:50.183 [2024-07-12 10:38:43.937384] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00004b590 00:24:50.183 [2024-07-12 10:38:43.941515] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000b780 00:24:50.183 [2024-07-12 10:38:43.941537] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000b780 00:24:50.183 [2024-07-12 10:38:43.941670] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:50.183 10:38:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:50.183 "name": "raid_bdev1", 00:24:50.183 "uuid": "0e1c8640-056e-4903-a45b-26994055d1dc", 00:24:50.183 "strip_size_kb": 64, 00:24:50.183 "state": "online", 00:24:50.183 "raid_level": "raid5f", 00:24:50.183 "superblock": true, 00:24:50.183 "num_base_bdevs": 3, 00:24:50.183 "num_base_bdevs_discovered": 3, 00:24:50.183 "num_base_bdevs_operational": 3, 00:24:50.183 "base_bdevs_list": [ 00:24:50.183 { 00:24:50.183 "name": "spare", 00:24:50.183 "uuid": "d196e521-6029-5d77-9366-cd1c17c38af0", 00:24:50.183 "is_configured": true, 00:24:50.183 "data_offset": 2048, 00:24:50.183 "data_size": 63488 00:24:50.183 }, 
00:24:50.183 { 00:24:50.183 "name": "BaseBdev2", 00:24:50.183 "uuid": "4e8311d3-239b-5e28-99ed-5b984362abdf", 00:24:50.183 "is_configured": true, 00:24:50.183 "data_offset": 2048, 00:24:50.183 "data_size": 63488 00:24:50.183 }, 00:24:50.183 { 00:24:50.183 "name": "BaseBdev3", 00:24:50.183 "uuid": "0e95c764-baea-551f-8a4d-5a9deaf91976", 00:24:50.183 "is_configured": true, 00:24:50.183 "data_offset": 2048, 00:24:50.183 "data_size": 63488 00:24:50.183 } 00:24:50.183 ] 00:24:50.183 }' 00:24:50.183 10:38:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:50.183 10:38:44 -- common/autotest_common.sh@10 -- # set +x 00:24:51.115 10:38:44 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:51.115 10:38:44 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:51.115 10:38:44 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:51.115 10:38:44 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:51.115 10:38:44 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:51.115 10:38:44 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:51.115 10:38:44 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:51.115 10:38:45 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:51.115 "name": "raid_bdev1", 00:24:51.115 "uuid": "0e1c8640-056e-4903-a45b-26994055d1dc", 00:24:51.115 "strip_size_kb": 64, 00:24:51.115 "state": "online", 00:24:51.115 "raid_level": "raid5f", 00:24:51.115 "superblock": true, 00:24:51.115 "num_base_bdevs": 3, 00:24:51.115 "num_base_bdevs_discovered": 3, 00:24:51.115 "num_base_bdevs_operational": 3, 00:24:51.115 "base_bdevs_list": [ 00:24:51.115 { 00:24:51.115 "name": "spare", 00:24:51.115 "uuid": "d196e521-6029-5d77-9366-cd1c17c38af0", 00:24:51.115 "is_configured": true, 00:24:51.115 "data_offset": 2048, 00:24:51.115 "data_size": 63488 00:24:51.115 }, 00:24:51.115 { 00:24:51.115 "name": "BaseBdev2", 00:24:51.115 "uuid": "4e8311d3-239b-5e28-99ed-5b984362abdf", 00:24:51.115 "is_configured": true, 00:24:51.115 "data_offset": 2048, 00:24:51.115 "data_size": 63488 00:24:51.115 }, 00:24:51.115 { 00:24:51.115 "name": "BaseBdev3", 00:24:51.115 "uuid": "0e95c764-baea-551f-8a4d-5a9deaf91976", 00:24:51.115 "is_configured": true, 00:24:51.115 "data_offset": 2048, 00:24:51.115 "data_size": 63488 00:24:51.115 } 00:24:51.115 ] 00:24:51.115 }' 00:24:51.115 10:38:45 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:51.374 10:38:45 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:51.374 10:38:45 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:51.374 10:38:45 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:51.374 10:38:45 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:24:51.374 10:38:45 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:51.633 10:38:45 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:24:51.633 10:38:45 -- bdev/bdev_raid.sh@709 -- # killprocess 132635 00:24:51.633 10:38:45 -- common/autotest_common.sh@926 -- # '[' -z 132635 ']' 00:24:51.633 10:38:45 -- common/autotest_common.sh@930 -- # kill -0 132635 00:24:51.633 10:38:45 -- common/autotest_common.sh@931 -- # uname 00:24:51.633 10:38:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:51.633 10:38:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 132635 00:24:51.633 killing 
process with pid 132635 00:24:51.633 Received shutdown signal, test time was about 60.000000 seconds 00:24:51.633 00:24:51.633 Latency(us) 00:24:51.633 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:51.633 =================================================================================================================== 00:24:51.633 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:51.633 10:38:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:51.633 10:38:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:51.633 10:38:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 132635' 00:24:51.633 10:38:45 -- common/autotest_common.sh@945 -- # kill 132635 00:24:51.633 10:38:45 -- common/autotest_common.sh@950 -- # wait 132635 00:24:51.634 [2024-07-12 10:38:45.315292] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:51.634 [2024-07-12 10:38:45.315348] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:51.634 [2024-07-12 10:38:45.315427] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:51.634 [2024-07-12 10:38:45.315456] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000b780 name raid_bdev1, state offline 00:24:51.893 [2024-07-12 10:38:45.578852] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:52.829 ************************************ 00:24:52.829 END TEST raid5f_rebuild_test_sb 00:24:52.829 ************************************ 00:24:52.829 10:38:46 -- bdev/bdev_raid.sh@711 -- # return 0 00:24:52.829 00:24:52.829 real 0m24.063s 00:24:52.829 user 0m37.757s 00:24:52.829 sys 0m2.504s 00:24:52.829 10:38:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:52.829 10:38:46 -- common/autotest_common.sh@10 -- # set +x 00:24:52.829 10:38:46 -- bdev/bdev_raid.sh@743 -- # for n in {3..4} 00:24:52.829 10:38:46 -- bdev/bdev_raid.sh@744 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:24:52.829 10:38:46 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:24:52.829 10:38:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:52.829 10:38:46 -- common/autotest_common.sh@10 -- # set +x 00:24:52.829 ************************************ 00:24:52.829 START TEST raid5f_state_function_test 00:24:52.829 ************************************ 00:24:52.829 10:38:46 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid5f 4 false 00:24:52.829 10:38:46 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:24:52.829 10:38:46 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:24:52.829 10:38:46 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:24:52.829 10:38:46 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:24:52.829 10:38:46 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:24:52.829 10:38:46 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:24:52.829 10:38:46 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:52.829 10:38:46 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:24:52.829 10:38:46 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:52.829 10:38:46 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:52.829 10:38:46 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:24:52.829 10:38:46 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:52.829 10:38:46 -- bdev/bdev_raid.sh@206 -- # (( i <= 
num_base_bdevs )) 00:24:52.829 10:38:46 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:24:52.829 10:38:46 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:52.829 10:38:46 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:52.829 10:38:46 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:24:52.829 10:38:46 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:52.829 10:38:46 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:52.829 10:38:46 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:24:52.829 10:38:46 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:24:52.829 10:38:46 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:24:52.829 10:38:46 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:24:52.829 10:38:46 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:24:52.829 10:38:46 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:24:52.829 10:38:46 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:24:52.829 10:38:46 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:24:52.829 10:38:46 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:24:52.829 10:38:46 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:24:52.829 10:38:46 -- bdev/bdev_raid.sh@226 -- # raid_pid=133326 00:24:52.829 10:38:46 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 133326' 00:24:52.829 Process raid pid: 133326 00:24:52.829 10:38:46 -- bdev/bdev_raid.sh@228 -- # waitforlisten 133326 /var/tmp/spdk-raid.sock 00:24:52.829 10:38:46 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:24:52.829 10:38:46 -- common/autotest_common.sh@819 -- # '[' -z 133326 ']' 00:24:52.829 10:38:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:52.829 10:38:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:52.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:52.829 10:38:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:52.829 10:38:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:52.829 10:38:46 -- common/autotest_common.sh@10 -- # set +x 00:24:52.829 [2024-07-12 10:38:46.714910] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:24:52.829 [2024-07-12 10:38:46.715102] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:53.089 [2024-07-12 10:38:46.877815] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:53.347 [2024-07-12 10:38:47.057639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:53.348 [2024-07-12 10:38:47.244214] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:53.915 10:38:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:53.915 10:38:47 -- common/autotest_common.sh@852 -- # return 0 00:24:53.915 10:38:47 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:53.915 [2024-07-12 10:38:47.771734] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:53.915 [2024-07-12 10:38:47.771828] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:53.915 [2024-07-12 10:38:47.771840] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:53.915 [2024-07-12 10:38:47.771861] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:53.915 [2024-07-12 10:38:47.771868] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:53.915 [2024-07-12 10:38:47.771904] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:53.915 [2024-07-12 10:38:47.771913] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:53.915 [2024-07-12 10:38:47.771938] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:53.915 10:38:47 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:53.915 10:38:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:53.915 10:38:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:53.915 10:38:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:53.915 10:38:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:53.915 10:38:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:53.915 10:38:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:53.915 10:38:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:53.915 10:38:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:53.915 10:38:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:53.915 10:38:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:53.915 10:38:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:54.174 10:38:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:54.174 "name": "Existed_Raid", 00:24:54.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:54.174 "strip_size_kb": 64, 00:24:54.174 "state": "configuring", 00:24:54.174 "raid_level": "raid5f", 00:24:54.174 "superblock": false, 00:24:54.174 "num_base_bdevs": 4, 00:24:54.174 "num_base_bdevs_discovered": 0, 00:24:54.174 "num_base_bdevs_operational": 4, 00:24:54.174 "base_bdevs_list": [ 00:24:54.174 { 00:24:54.174 
"name": "BaseBdev1", 00:24:54.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:54.174 "is_configured": false, 00:24:54.174 "data_offset": 0, 00:24:54.174 "data_size": 0 00:24:54.174 }, 00:24:54.174 { 00:24:54.174 "name": "BaseBdev2", 00:24:54.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:54.174 "is_configured": false, 00:24:54.174 "data_offset": 0, 00:24:54.174 "data_size": 0 00:24:54.174 }, 00:24:54.174 { 00:24:54.174 "name": "BaseBdev3", 00:24:54.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:54.174 "is_configured": false, 00:24:54.174 "data_offset": 0, 00:24:54.174 "data_size": 0 00:24:54.174 }, 00:24:54.174 { 00:24:54.174 "name": "BaseBdev4", 00:24:54.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:54.174 "is_configured": false, 00:24:54.174 "data_offset": 0, 00:24:54.174 "data_size": 0 00:24:54.174 } 00:24:54.174 ] 00:24:54.174 }' 00:24:54.174 10:38:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:54.174 10:38:48 -- common/autotest_common.sh@10 -- # set +x 00:24:54.742 10:38:48 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:55.000 [2024-07-12 10:38:48.811756] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:55.000 [2024-07-12 10:38:48.811785] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:24:55.000 10:38:48 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:55.259 [2024-07-12 10:38:48.987809] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:55.259 [2024-07-12 10:38:48.987851] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:55.259 [2024-07-12 10:38:48.987860] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:55.259 [2024-07-12 10:38:48.987890] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:55.259 [2024-07-12 10:38:48.987898] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:55.259 [2024-07-12 10:38:48.987938] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:55.259 [2024-07-12 10:38:48.987945] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:55.259 [2024-07-12 10:38:48.987966] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:55.259 10:38:48 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:55.517 [2024-07-12 10:38:49.269276] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:55.517 BaseBdev1 00:24:55.517 10:38:49 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:24:55.517 10:38:49 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:24:55.517 10:38:49 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:24:55.517 10:38:49 -- common/autotest_common.sh@889 -- # local i 00:24:55.517 10:38:49 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:24:55.517 10:38:49 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:24:55.517 10:38:49 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:55.776 10:38:49 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:55.776 [ 00:24:55.776 { 00:24:55.776 "name": "BaseBdev1", 00:24:55.776 "aliases": [ 00:24:55.776 "f51b57df-c14a-45bb-98d2-488289cd7249" 00:24:55.776 ], 00:24:55.776 "product_name": "Malloc disk", 00:24:55.776 "block_size": 512, 00:24:55.776 "num_blocks": 65536, 00:24:55.776 "uuid": "f51b57df-c14a-45bb-98d2-488289cd7249", 00:24:55.776 "assigned_rate_limits": { 00:24:55.776 "rw_ios_per_sec": 0, 00:24:55.776 "rw_mbytes_per_sec": 0, 00:24:55.776 "r_mbytes_per_sec": 0, 00:24:55.776 "w_mbytes_per_sec": 0 00:24:55.776 }, 00:24:55.776 "claimed": true, 00:24:55.776 "claim_type": "exclusive_write", 00:24:55.776 "zoned": false, 00:24:55.776 "supported_io_types": { 00:24:55.776 "read": true, 00:24:55.776 "write": true, 00:24:55.776 "unmap": true, 00:24:55.776 "write_zeroes": true, 00:24:55.776 "flush": true, 00:24:55.776 "reset": true, 00:24:55.776 "compare": false, 00:24:55.776 "compare_and_write": false, 00:24:55.776 "abort": true, 00:24:55.776 "nvme_admin": false, 00:24:55.776 "nvme_io": false 00:24:55.776 }, 00:24:55.776 "memory_domains": [ 00:24:55.776 { 00:24:55.776 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:55.776 "dma_device_type": 2 00:24:55.776 } 00:24:55.776 ], 00:24:55.776 "driver_specific": {} 00:24:55.776 } 00:24:55.776 ] 00:24:55.776 10:38:49 -- common/autotest_common.sh@895 -- # return 0 00:24:55.776 10:38:49 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:55.776 10:38:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:55.776 10:38:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:55.776 10:38:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:55.776 10:38:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:55.776 10:38:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:55.776 10:38:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:55.776 10:38:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:55.776 10:38:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:55.776 10:38:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:55.776 10:38:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:55.776 10:38:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:56.035 10:38:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:56.035 "name": "Existed_Raid", 00:24:56.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:56.035 "strip_size_kb": 64, 00:24:56.035 "state": "configuring", 00:24:56.035 "raid_level": "raid5f", 00:24:56.035 "superblock": false, 00:24:56.035 "num_base_bdevs": 4, 00:24:56.035 "num_base_bdevs_discovered": 1, 00:24:56.035 "num_base_bdevs_operational": 4, 00:24:56.035 "base_bdevs_list": [ 00:24:56.035 { 00:24:56.035 "name": "BaseBdev1", 00:24:56.035 "uuid": "f51b57df-c14a-45bb-98d2-488289cd7249", 00:24:56.035 "is_configured": true, 00:24:56.035 "data_offset": 0, 00:24:56.035 "data_size": 65536 00:24:56.035 }, 00:24:56.035 { 00:24:56.035 "name": "BaseBdev2", 00:24:56.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:56.035 "is_configured": false, 00:24:56.035 "data_offset": 0, 00:24:56.035 "data_size": 0 00:24:56.035 }, 
00:24:56.035 { 00:24:56.035 "name": "BaseBdev3", 00:24:56.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:56.035 "is_configured": false, 00:24:56.035 "data_offset": 0, 00:24:56.035 "data_size": 0 00:24:56.035 }, 00:24:56.035 { 00:24:56.035 "name": "BaseBdev4", 00:24:56.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:56.035 "is_configured": false, 00:24:56.035 "data_offset": 0, 00:24:56.035 "data_size": 0 00:24:56.035 } 00:24:56.035 ] 00:24:56.035 }' 00:24:56.035 10:38:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:56.035 10:38:49 -- common/autotest_common.sh@10 -- # set +x 00:24:56.603 10:38:50 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:56.861 [2024-07-12 10:38:50.605517] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:56.861 [2024-07-12 10:38:50.605551] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:24:56.861 10:38:50 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:24:56.861 10:38:50 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:57.120 [2024-07-12 10:38:50.841605] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:57.120 [2024-07-12 10:38:50.843420] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:57.120 [2024-07-12 10:38:50.843494] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:57.120 [2024-07-12 10:38:50.843505] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:57.120 [2024-07-12 10:38:50.843529] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:57.120 [2024-07-12 10:38:50.843537] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:57.120 [2024-07-12 10:38:50.843553] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:57.120 10:38:50 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:24:57.120 10:38:50 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:57.120 10:38:50 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:57.120 10:38:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:57.120 10:38:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:57.120 10:38:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:57.120 10:38:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:57.120 10:38:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:57.120 10:38:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:57.120 10:38:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:57.120 10:38:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:57.120 10:38:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:57.120 10:38:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:57.120 10:38:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:57.379 10:38:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:57.379 "name": "Existed_Raid", 00:24:57.379 
"uuid": "00000000-0000-0000-0000-000000000000", 00:24:57.379 "strip_size_kb": 64, 00:24:57.379 "state": "configuring", 00:24:57.379 "raid_level": "raid5f", 00:24:57.379 "superblock": false, 00:24:57.379 "num_base_bdevs": 4, 00:24:57.379 "num_base_bdevs_discovered": 1, 00:24:57.379 "num_base_bdevs_operational": 4, 00:24:57.379 "base_bdevs_list": [ 00:24:57.379 { 00:24:57.379 "name": "BaseBdev1", 00:24:57.379 "uuid": "f51b57df-c14a-45bb-98d2-488289cd7249", 00:24:57.379 "is_configured": true, 00:24:57.379 "data_offset": 0, 00:24:57.379 "data_size": 65536 00:24:57.379 }, 00:24:57.379 { 00:24:57.379 "name": "BaseBdev2", 00:24:57.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:57.379 "is_configured": false, 00:24:57.379 "data_offset": 0, 00:24:57.379 "data_size": 0 00:24:57.379 }, 00:24:57.379 { 00:24:57.379 "name": "BaseBdev3", 00:24:57.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:57.379 "is_configured": false, 00:24:57.379 "data_offset": 0, 00:24:57.379 "data_size": 0 00:24:57.379 }, 00:24:57.379 { 00:24:57.379 "name": "BaseBdev4", 00:24:57.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:57.379 "is_configured": false, 00:24:57.379 "data_offset": 0, 00:24:57.379 "data_size": 0 00:24:57.379 } 00:24:57.379 ] 00:24:57.379 }' 00:24:57.379 10:38:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:57.379 10:38:51 -- common/autotest_common.sh@10 -- # set +x 00:24:57.946 10:38:51 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:57.946 [2024-07-12 10:38:51.838446] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:57.946 BaseBdev2 00:24:57.946 10:38:51 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:24:57.946 10:38:51 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:24:57.946 10:38:51 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:24:57.946 10:38:51 -- common/autotest_common.sh@889 -- # local i 00:24:57.946 10:38:51 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:24:57.946 10:38:51 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:24:57.946 10:38:51 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:58.205 10:38:52 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:58.464 [ 00:24:58.464 { 00:24:58.464 "name": "BaseBdev2", 00:24:58.464 "aliases": [ 00:24:58.464 "3903a051-3d4b-4700-8048-97e3da598ccb" 00:24:58.464 ], 00:24:58.464 "product_name": "Malloc disk", 00:24:58.464 "block_size": 512, 00:24:58.464 "num_blocks": 65536, 00:24:58.464 "uuid": "3903a051-3d4b-4700-8048-97e3da598ccb", 00:24:58.464 "assigned_rate_limits": { 00:24:58.464 "rw_ios_per_sec": 0, 00:24:58.464 "rw_mbytes_per_sec": 0, 00:24:58.464 "r_mbytes_per_sec": 0, 00:24:58.464 "w_mbytes_per_sec": 0 00:24:58.464 }, 00:24:58.464 "claimed": true, 00:24:58.464 "claim_type": "exclusive_write", 00:24:58.464 "zoned": false, 00:24:58.464 "supported_io_types": { 00:24:58.464 "read": true, 00:24:58.464 "write": true, 00:24:58.464 "unmap": true, 00:24:58.464 "write_zeroes": true, 00:24:58.464 "flush": true, 00:24:58.464 "reset": true, 00:24:58.464 "compare": false, 00:24:58.464 "compare_and_write": false, 00:24:58.464 "abort": true, 00:24:58.464 "nvme_admin": false, 00:24:58.464 "nvme_io": false 00:24:58.464 }, 00:24:58.464 "memory_domains": [ 
00:24:58.464 { 00:24:58.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:58.464 "dma_device_type": 2 00:24:58.464 } 00:24:58.464 ], 00:24:58.464 "driver_specific": {} 00:24:58.464 } 00:24:58.464 ] 00:24:58.464 10:38:52 -- common/autotest_common.sh@895 -- # return 0 00:24:58.464 10:38:52 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:24:58.464 10:38:52 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:58.464 10:38:52 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:58.464 10:38:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:58.464 10:38:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:58.464 10:38:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:58.464 10:38:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:58.464 10:38:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:58.465 10:38:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:58.465 10:38:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:58.465 10:38:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:58.465 10:38:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:58.465 10:38:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:58.465 10:38:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:58.465 10:38:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:58.465 "name": "Existed_Raid", 00:24:58.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:58.465 "strip_size_kb": 64, 00:24:58.465 "state": "configuring", 00:24:58.465 "raid_level": "raid5f", 00:24:58.465 "superblock": false, 00:24:58.465 "num_base_bdevs": 4, 00:24:58.465 "num_base_bdevs_discovered": 2, 00:24:58.465 "num_base_bdevs_operational": 4, 00:24:58.465 "base_bdevs_list": [ 00:24:58.465 { 00:24:58.465 "name": "BaseBdev1", 00:24:58.465 "uuid": "f51b57df-c14a-45bb-98d2-488289cd7249", 00:24:58.465 "is_configured": true, 00:24:58.465 "data_offset": 0, 00:24:58.465 "data_size": 65536 00:24:58.465 }, 00:24:58.465 { 00:24:58.465 "name": "BaseBdev2", 00:24:58.465 "uuid": "3903a051-3d4b-4700-8048-97e3da598ccb", 00:24:58.465 "is_configured": true, 00:24:58.465 "data_offset": 0, 00:24:58.465 "data_size": 65536 00:24:58.465 }, 00:24:58.465 { 00:24:58.465 "name": "BaseBdev3", 00:24:58.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:58.465 "is_configured": false, 00:24:58.465 "data_offset": 0, 00:24:58.465 "data_size": 0 00:24:58.465 }, 00:24:58.465 { 00:24:58.465 "name": "BaseBdev4", 00:24:58.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:58.465 "is_configured": false, 00:24:58.465 "data_offset": 0, 00:24:58.465 "data_size": 0 00:24:58.465 } 00:24:58.465 ] 00:24:58.465 }' 00:24:58.465 10:38:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:58.465 10:38:52 -- common/autotest_common.sh@10 -- # set +x 00:24:59.400 10:38:53 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:24:59.400 [2024-07-12 10:38:53.213870] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:59.400 BaseBdev3 00:24:59.400 10:38:53 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:24:59.400 10:38:53 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:24:59.400 10:38:53 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:24:59.400 
10:38:53 -- common/autotest_common.sh@889 -- # local i 00:24:59.400 10:38:53 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:24:59.400 10:38:53 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:24:59.400 10:38:53 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:59.658 10:38:53 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:59.917 [ 00:24:59.917 { 00:24:59.917 "name": "BaseBdev3", 00:24:59.917 "aliases": [ 00:24:59.917 "6a138e96-264c-4d76-a2f6-18a18cc9d484" 00:24:59.917 ], 00:24:59.917 "product_name": "Malloc disk", 00:24:59.917 "block_size": 512, 00:24:59.917 "num_blocks": 65536, 00:24:59.917 "uuid": "6a138e96-264c-4d76-a2f6-18a18cc9d484", 00:24:59.917 "assigned_rate_limits": { 00:24:59.917 "rw_ios_per_sec": 0, 00:24:59.917 "rw_mbytes_per_sec": 0, 00:24:59.917 "r_mbytes_per_sec": 0, 00:24:59.917 "w_mbytes_per_sec": 0 00:24:59.917 }, 00:24:59.917 "claimed": true, 00:24:59.917 "claim_type": "exclusive_write", 00:24:59.917 "zoned": false, 00:24:59.917 "supported_io_types": { 00:24:59.917 "read": true, 00:24:59.917 "write": true, 00:24:59.917 "unmap": true, 00:24:59.917 "write_zeroes": true, 00:24:59.917 "flush": true, 00:24:59.917 "reset": true, 00:24:59.917 "compare": false, 00:24:59.917 "compare_and_write": false, 00:24:59.917 "abort": true, 00:24:59.917 "nvme_admin": false, 00:24:59.917 "nvme_io": false 00:24:59.917 }, 00:24:59.917 "memory_domains": [ 00:24:59.917 { 00:24:59.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:59.917 "dma_device_type": 2 00:24:59.917 } 00:24:59.917 ], 00:24:59.917 "driver_specific": {} 00:24:59.917 } 00:24:59.917 ] 00:24:59.917 10:38:53 -- common/autotest_common.sh@895 -- # return 0 00:24:59.917 10:38:53 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:24:59.917 10:38:53 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:59.917 10:38:53 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:59.917 10:38:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:59.917 10:38:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:59.917 10:38:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:59.917 10:38:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:59.917 10:38:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:59.917 10:38:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:59.917 10:38:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:59.917 10:38:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:59.917 10:38:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:59.917 10:38:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:59.917 10:38:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:59.917 10:38:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:59.917 "name": "Existed_Raid", 00:24:59.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:59.917 "strip_size_kb": 64, 00:24:59.917 "state": "configuring", 00:24:59.917 "raid_level": "raid5f", 00:24:59.917 "superblock": false, 00:24:59.917 "num_base_bdevs": 4, 00:24:59.917 "num_base_bdevs_discovered": 3, 00:24:59.917 "num_base_bdevs_operational": 4, 00:24:59.917 "base_bdevs_list": [ 00:24:59.917 { 00:24:59.917 "name": 
"BaseBdev1", 00:24:59.917 "uuid": "f51b57df-c14a-45bb-98d2-488289cd7249", 00:24:59.917 "is_configured": true, 00:24:59.917 "data_offset": 0, 00:24:59.917 "data_size": 65536 00:24:59.917 }, 00:24:59.917 { 00:24:59.917 "name": "BaseBdev2", 00:24:59.917 "uuid": "3903a051-3d4b-4700-8048-97e3da598ccb", 00:24:59.917 "is_configured": true, 00:24:59.917 "data_offset": 0, 00:24:59.917 "data_size": 65536 00:24:59.917 }, 00:24:59.917 { 00:24:59.917 "name": "BaseBdev3", 00:24:59.917 "uuid": "6a138e96-264c-4d76-a2f6-18a18cc9d484", 00:24:59.917 "is_configured": true, 00:24:59.917 "data_offset": 0, 00:24:59.917 "data_size": 65536 00:24:59.917 }, 00:24:59.917 { 00:24:59.917 "name": "BaseBdev4", 00:24:59.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:59.917 "is_configured": false, 00:24:59.917 "data_offset": 0, 00:24:59.917 "data_size": 0 00:24:59.917 } 00:24:59.917 ] 00:24:59.917 }' 00:24:59.917 10:38:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:59.917 10:38:53 -- common/autotest_common.sh@10 -- # set +x 00:25:00.853 10:38:54 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:25:00.853 [2024-07-12 10:38:54.657300] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:00.853 [2024-07-12 10:38:54.657355] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:25:00.853 [2024-07-12 10:38:54.657365] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:25:00.853 [2024-07-12 10:38:54.657480] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:25:00.853 [2024-07-12 10:38:54.663025] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:25:00.853 [2024-07-12 10:38:54.663048] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:25:00.853 [2024-07-12 10:38:54.663282] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:00.853 BaseBdev4 00:25:00.853 10:38:54 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:25:00.853 10:38:54 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:25:00.853 10:38:54 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:25:00.853 10:38:54 -- common/autotest_common.sh@889 -- # local i 00:25:00.853 10:38:54 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:25:00.853 10:38:54 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:25:00.853 10:38:54 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:01.112 10:38:54 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:25:01.371 [ 00:25:01.371 { 00:25:01.371 "name": "BaseBdev4", 00:25:01.371 "aliases": [ 00:25:01.371 "5135d2f4-706a-4f08-b1cf-518e90a939b0" 00:25:01.371 ], 00:25:01.371 "product_name": "Malloc disk", 00:25:01.371 "block_size": 512, 00:25:01.371 "num_blocks": 65536, 00:25:01.371 "uuid": "5135d2f4-706a-4f08-b1cf-518e90a939b0", 00:25:01.371 "assigned_rate_limits": { 00:25:01.371 "rw_ios_per_sec": 0, 00:25:01.371 "rw_mbytes_per_sec": 0, 00:25:01.371 "r_mbytes_per_sec": 0, 00:25:01.371 "w_mbytes_per_sec": 0 00:25:01.371 }, 00:25:01.371 "claimed": true, 00:25:01.371 "claim_type": "exclusive_write", 00:25:01.371 "zoned": false, 00:25:01.371 
"supported_io_types": { 00:25:01.371 "read": true, 00:25:01.371 "write": true, 00:25:01.371 "unmap": true, 00:25:01.371 "write_zeroes": true, 00:25:01.371 "flush": true, 00:25:01.371 "reset": true, 00:25:01.371 "compare": false, 00:25:01.371 "compare_and_write": false, 00:25:01.371 "abort": true, 00:25:01.371 "nvme_admin": false, 00:25:01.371 "nvme_io": false 00:25:01.371 }, 00:25:01.371 "memory_domains": [ 00:25:01.371 { 00:25:01.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:01.371 "dma_device_type": 2 00:25:01.371 } 00:25:01.371 ], 00:25:01.371 "driver_specific": {} 00:25:01.371 } 00:25:01.371 ] 00:25:01.371 10:38:55 -- common/autotest_common.sh@895 -- # return 0 00:25:01.371 10:38:55 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:25:01.371 10:38:55 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:25:01.371 10:38:55 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:25:01.371 10:38:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:01.371 10:38:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:01.371 10:38:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:01.371 10:38:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:01.371 10:38:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:01.371 10:38:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:01.371 10:38:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:01.371 10:38:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:01.371 10:38:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:01.371 10:38:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:01.371 10:38:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:01.630 10:38:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:01.630 "name": "Existed_Raid", 00:25:01.630 "uuid": "7d5fd602-c6ff-42ae-a204-8d05cc9f973c", 00:25:01.630 "strip_size_kb": 64, 00:25:01.630 "state": "online", 00:25:01.630 "raid_level": "raid5f", 00:25:01.630 "superblock": false, 00:25:01.630 "num_base_bdevs": 4, 00:25:01.630 "num_base_bdevs_discovered": 4, 00:25:01.630 "num_base_bdevs_operational": 4, 00:25:01.630 "base_bdevs_list": [ 00:25:01.630 { 00:25:01.630 "name": "BaseBdev1", 00:25:01.630 "uuid": "f51b57df-c14a-45bb-98d2-488289cd7249", 00:25:01.630 "is_configured": true, 00:25:01.630 "data_offset": 0, 00:25:01.630 "data_size": 65536 00:25:01.630 }, 00:25:01.630 { 00:25:01.630 "name": "BaseBdev2", 00:25:01.630 "uuid": "3903a051-3d4b-4700-8048-97e3da598ccb", 00:25:01.630 "is_configured": true, 00:25:01.630 "data_offset": 0, 00:25:01.630 "data_size": 65536 00:25:01.630 }, 00:25:01.630 { 00:25:01.630 "name": "BaseBdev3", 00:25:01.630 "uuid": "6a138e96-264c-4d76-a2f6-18a18cc9d484", 00:25:01.630 "is_configured": true, 00:25:01.630 "data_offset": 0, 00:25:01.630 "data_size": 65536 00:25:01.630 }, 00:25:01.630 { 00:25:01.630 "name": "BaseBdev4", 00:25:01.630 "uuid": "5135d2f4-706a-4f08-b1cf-518e90a939b0", 00:25:01.630 "is_configured": true, 00:25:01.630 "data_offset": 0, 00:25:01.630 "data_size": 65536 00:25:01.630 } 00:25:01.630 ] 00:25:01.630 }' 00:25:01.630 10:38:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:01.630 10:38:55 -- common/autotest_common.sh@10 -- # set +x 00:25:02.197 10:38:56 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 
00:25:02.455 [2024-07-12 10:38:56.251311] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:02.455 10:38:56 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:25:02.455 10:38:56 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:25:02.455 10:38:56 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:25:02.455 10:38:56 -- bdev/bdev_raid.sh@196 -- # return 0 00:25:02.455 10:38:56 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:25:02.455 10:38:56 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:25:02.455 10:38:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:02.455 10:38:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:02.455 10:38:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:02.455 10:38:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:02.455 10:38:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:02.455 10:38:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:02.455 10:38:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:02.455 10:38:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:02.455 10:38:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:02.455 10:38:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:02.455 10:38:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:02.714 10:38:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:02.714 "name": "Existed_Raid", 00:25:02.714 "uuid": "7d5fd602-c6ff-42ae-a204-8d05cc9f973c", 00:25:02.714 "strip_size_kb": 64, 00:25:02.714 "state": "online", 00:25:02.714 "raid_level": "raid5f", 00:25:02.714 "superblock": false, 00:25:02.714 "num_base_bdevs": 4, 00:25:02.714 "num_base_bdevs_discovered": 3, 00:25:02.714 "num_base_bdevs_operational": 3, 00:25:02.714 "base_bdevs_list": [ 00:25:02.714 { 00:25:02.714 "name": null, 00:25:02.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:02.714 "is_configured": false, 00:25:02.714 "data_offset": 0, 00:25:02.714 "data_size": 65536 00:25:02.714 }, 00:25:02.714 { 00:25:02.714 "name": "BaseBdev2", 00:25:02.714 "uuid": "3903a051-3d4b-4700-8048-97e3da598ccb", 00:25:02.714 "is_configured": true, 00:25:02.714 "data_offset": 0, 00:25:02.714 "data_size": 65536 00:25:02.714 }, 00:25:02.714 { 00:25:02.714 "name": "BaseBdev3", 00:25:02.714 "uuid": "6a138e96-264c-4d76-a2f6-18a18cc9d484", 00:25:02.714 "is_configured": true, 00:25:02.714 "data_offset": 0, 00:25:02.714 "data_size": 65536 00:25:02.714 }, 00:25:02.714 { 00:25:02.714 "name": "BaseBdev4", 00:25:02.714 "uuid": "5135d2f4-706a-4f08-b1cf-518e90a939b0", 00:25:02.714 "is_configured": true, 00:25:02.714 "data_offset": 0, 00:25:02.714 "data_size": 65536 00:25:02.714 } 00:25:02.714 ] 00:25:02.714 }' 00:25:02.714 10:38:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:02.714 10:38:56 -- common/autotest_common.sh@10 -- # set +x 00:25:03.647 10:38:57 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:25:03.647 10:38:57 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:25:03.647 10:38:57 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:03.647 10:38:57 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:25:03.647 10:38:57 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:25:03.647 10:38:57 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
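Note on the trace above: raid_state_function_test is exercising raid5f degraded operation here. It deletes one base bdev at a time over the RPC socket and re-reads the array JSON; because has_redundancy returns 0 for raid5f, the array is expected to stay "online" with num_base_bdevs_discovered dropping from 4 to 3. A minimal by-hand sketch of the same check — socket path, bdev and array names all taken from this log:

  # remove one of the four backing malloc bdevs
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2
  # a raid5f array must survive a single missing member: expect "online"
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "Existed_Raid") | .state'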
00:25:03.647 10:38:57 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:25:03.904 [2024-07-12 10:38:57.655695] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:03.904 [2024-07-12 10:38:57.655747] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:03.904 [2024-07-12 10:38:57.655833] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:03.904 10:38:57 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:25:03.904 10:38:57 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:25:03.904 10:38:57 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:03.904 10:38:57 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:25:04.162 10:38:57 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:25:04.162 10:38:57 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:04.162 10:38:57 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:25:04.420 [2024-07-12 10:38:58.117484] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:04.420 10:38:58 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:25:04.420 10:38:58 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:25:04.420 10:38:58 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:04.420 10:38:58 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:25:04.678 10:38:58 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:25:04.678 10:38:58 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:04.678 10:38:58 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:25:04.678 [2024-07-12 10:38:58.531725] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:25:04.678 [2024-07-12 10:38:58.531802] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:25:04.938 10:38:58 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:25:04.938 10:38:58 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:25:04.938 10:38:58 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:04.938 10:38:58 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:25:05.198 10:38:58 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:25:05.198 10:38:58 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:25:05.198 10:38:58 -- bdev/bdev_raid.sh@287 -- # killprocess 133326 00:25:05.198 10:38:58 -- common/autotest_common.sh@926 -- # '[' -z 133326 ']' 00:25:05.198 10:38:58 -- common/autotest_common.sh@930 -- # kill -0 133326 00:25:05.198 10:38:58 -- common/autotest_common.sh@931 -- # uname 00:25:05.198 10:38:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:05.198 10:38:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 133326 00:25:05.198 killing process with pid 133326 00:25:05.198 10:38:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:05.198 10:38:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:05.198 10:38:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 133326' 00:25:05.198 10:38:58 -- 
common/autotest_common.sh@945 -- # kill 133326 00:25:05.198 10:38:58 -- common/autotest_common.sh@950 -- # wait 133326 00:25:05.198 [2024-07-12 10:38:58.876723] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:05.198 [2024-07-12 10:38:58.876879] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:06.134 ************************************ 00:25:06.134 END TEST raid5f_state_function_test 00:25:06.134 ************************************ 00:25:06.134 10:38:59 -- bdev/bdev_raid.sh@289 -- # return 0 00:25:06.134 00:25:06.134 real 0m13.315s 00:25:06.134 user 0m23.870s 00:25:06.134 sys 0m1.348s 00:25:06.134 10:38:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:06.134 10:38:59 -- common/autotest_common.sh@10 -- # set +x 00:25:06.134 10:38:59 -- bdev/bdev_raid.sh@745 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:25:06.134 10:38:59 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:25:06.134 10:38:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:06.134 10:38:59 -- common/autotest_common.sh@10 -- # set +x 00:25:06.134 ************************************ 00:25:06.134 START TEST raid5f_state_function_test_sb 00:25:06.134 ************************************ 00:25:06.134 10:39:00 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid5f 4 true 00:25:06.134 10:39:00 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:25:06.134 10:39:00 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:25:06.134 10:39:00 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:25:06.134 10:39:00 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:25:06.134 10:39:00 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:25:06.134 10:39:00 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:25:06.134 10:39:00 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:25:06.134 10:39:00 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:25:06.134 10:39:00 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:25:06.134 10:39:00 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:25:06.134 10:39:00 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:25:06.134 10:39:00 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:25:06.134 10:39:00 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:25:06.134 10:39:00 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:25:06.134 10:39:00 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:25:06.134 10:39:00 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:25:06.134 10:39:00 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:25:06.134 10:39:00 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:25:06.134 10:39:00 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:25:06.134 10:39:00 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:25:06.134 10:39:00 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:25:06.135 10:39:00 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:25:06.135 10:39:00 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:25:06.135 10:39:00 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:25:06.135 10:39:00 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:25:06.135 10:39:00 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:25:06.135 10:39:00 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:25:06.135 10:39:00 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:25:06.135 10:39:00 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:25:06.135 
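Note: raid5f_state_function_test_sb repeats the state-function flow with superblock=true, so superblock_create_arg above is set to -s and every bdev_raid_create in the trace below carries it. With the superblock enabled, each 65536-block member reserves 2048 blocks for metadata, which is why the JSON dumps that follow show data_offset 2048 / data_size 63488, and why the assembled array's DEBUG line reports blockcnt 190464 (3 data members x 63488 blocks for 4-way raid5f). The create call as it is issued below, for reference:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_raid_create -z 64 -s -r raid5f \
      -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid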
10:39:00 -- bdev/bdev_raid.sh@226 -- # raid_pid=133777 00:25:06.135 Process raid pid: 133777 00:25:06.135 10:39:00 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 133777' 00:25:06.135 10:39:00 -- bdev/bdev_raid.sh@228 -- # waitforlisten 133777 /var/tmp/spdk-raid.sock 00:25:06.135 10:39:00 -- common/autotest_common.sh@819 -- # '[' -z 133777 ']' 00:25:06.135 10:39:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:06.135 10:39:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:06.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:06.135 10:39:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:06.135 10:39:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:06.135 10:39:00 -- common/autotest_common.sh@10 -- # set +x 00:25:06.135 10:39:00 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:25:06.393 [2024-07-12 10:39:00.082895] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:25:06.393 [2024-07-12 10:39:00.083294] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:06.393 [2024-07-12 10:39:00.254477] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:06.654 [2024-07-12 10:39:00.480505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:06.937 [2024-07-12 10:39:00.674034] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:07.215 10:39:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:07.215 10:39:00 -- common/autotest_common.sh@852 -- # return 0 00:25:07.215 10:39:00 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:07.215 [2024-07-12 10:39:01.101017] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:07.215 [2024-07-12 10:39:01.101108] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:07.215 [2024-07-12 10:39:01.101121] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:07.215 [2024-07-12 10:39:01.101142] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:07.215 [2024-07-12 10:39:01.101148] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:07.215 [2024-07-12 10:39:01.101185] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:07.215 [2024-07-12 10:39:01.101193] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:07.215 [2024-07-12 10:39:01.101214] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:07.215 10:39:01 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:25:07.215 10:39:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:07.215 10:39:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:07.215 10:39:01 -- bdev/bdev_raid.sh@119 -- # 
local raid_level=raid5f 00:25:07.215 10:39:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:07.215 10:39:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:07.215 10:39:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:07.215 10:39:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:07.215 10:39:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:07.215 10:39:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:07.215 10:39:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:07.215 10:39:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:07.485 10:39:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:07.485 "name": "Existed_Raid", 00:25:07.485 "uuid": "0865555c-0bf3-4b61-af62-36107bf43751", 00:25:07.485 "strip_size_kb": 64, 00:25:07.485 "state": "configuring", 00:25:07.485 "raid_level": "raid5f", 00:25:07.485 "superblock": true, 00:25:07.485 "num_base_bdevs": 4, 00:25:07.485 "num_base_bdevs_discovered": 0, 00:25:07.485 "num_base_bdevs_operational": 4, 00:25:07.485 "base_bdevs_list": [ 00:25:07.485 { 00:25:07.485 "name": "BaseBdev1", 00:25:07.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:07.485 "is_configured": false, 00:25:07.485 "data_offset": 0, 00:25:07.485 "data_size": 0 00:25:07.485 }, 00:25:07.485 { 00:25:07.485 "name": "BaseBdev2", 00:25:07.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:07.485 "is_configured": false, 00:25:07.485 "data_offset": 0, 00:25:07.485 "data_size": 0 00:25:07.485 }, 00:25:07.485 { 00:25:07.485 "name": "BaseBdev3", 00:25:07.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:07.485 "is_configured": false, 00:25:07.485 "data_offset": 0, 00:25:07.485 "data_size": 0 00:25:07.485 }, 00:25:07.485 { 00:25:07.485 "name": "BaseBdev4", 00:25:07.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:07.485 "is_configured": false, 00:25:07.485 "data_offset": 0, 00:25:07.485 "data_size": 0 00:25:07.485 } 00:25:07.485 ] 00:25:07.485 }' 00:25:07.485 10:39:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:07.485 10:39:01 -- common/autotest_common.sh@10 -- # set +x 00:25:08.051 10:39:01 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:08.309 [2024-07-12 10:39:02.101040] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:08.310 [2024-07-12 10:39:02.101079] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:25:08.310 10:39:02 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:08.568 [2024-07-12 10:39:02.345156] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:08.568 [2024-07-12 10:39:02.345200] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:08.568 [2024-07-12 10:39:02.345210] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:08.568 [2024-07-12 10:39:02.345239] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:08.568 [2024-07-12 10:39:02.345247] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:08.568 
[2024-07-12 10:39:02.345281] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:08.568 [2024-07-12 10:39:02.345288] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:08.568 [2024-07-12 10:39:02.345309] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:08.568 10:39:02 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:08.826 [2024-07-12 10:39:02.568503] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:08.826 BaseBdev1 00:25:08.826 10:39:02 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:25:08.826 10:39:02 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:25:08.826 10:39:02 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:25:08.826 10:39:02 -- common/autotest_common.sh@889 -- # local i 00:25:08.826 10:39:02 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:25:08.826 10:39:02 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:25:08.826 10:39:02 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:09.084 10:39:02 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:09.084 [ 00:25:09.084 { 00:25:09.084 "name": "BaseBdev1", 00:25:09.084 "aliases": [ 00:25:09.084 "7f97515e-bd8b-411c-9320-a2aa36a73c51" 00:25:09.084 ], 00:25:09.084 "product_name": "Malloc disk", 00:25:09.084 "block_size": 512, 00:25:09.084 "num_blocks": 65536, 00:25:09.084 "uuid": "7f97515e-bd8b-411c-9320-a2aa36a73c51", 00:25:09.084 "assigned_rate_limits": { 00:25:09.084 "rw_ios_per_sec": 0, 00:25:09.084 "rw_mbytes_per_sec": 0, 00:25:09.084 "r_mbytes_per_sec": 0, 00:25:09.085 "w_mbytes_per_sec": 0 00:25:09.085 }, 00:25:09.085 "claimed": true, 00:25:09.085 "claim_type": "exclusive_write", 00:25:09.085 "zoned": false, 00:25:09.085 "supported_io_types": { 00:25:09.085 "read": true, 00:25:09.085 "write": true, 00:25:09.085 "unmap": true, 00:25:09.085 "write_zeroes": true, 00:25:09.085 "flush": true, 00:25:09.085 "reset": true, 00:25:09.085 "compare": false, 00:25:09.085 "compare_and_write": false, 00:25:09.085 "abort": true, 00:25:09.085 "nvme_admin": false, 00:25:09.085 "nvme_io": false 00:25:09.085 }, 00:25:09.085 "memory_domains": [ 00:25:09.085 { 00:25:09.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:09.085 "dma_device_type": 2 00:25:09.085 } 00:25:09.085 ], 00:25:09.085 "driver_specific": {} 00:25:09.085 } 00:25:09.085 ] 00:25:09.085 10:39:02 -- common/autotest_common.sh@895 -- # return 0 00:25:09.085 10:39:02 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:25:09.085 10:39:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:09.085 10:39:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:09.085 10:39:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:09.085 10:39:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:09.085 10:39:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:09.085 10:39:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:09.085 10:39:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:09.085 10:39:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:09.085 
10:39:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:09.085 10:39:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:09.085 10:39:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:09.342 10:39:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:09.342 "name": "Existed_Raid", 00:25:09.342 "uuid": "02aa5cc6-6260-4155-b793-3b57f12d4070", 00:25:09.342 "strip_size_kb": 64, 00:25:09.342 "state": "configuring", 00:25:09.342 "raid_level": "raid5f", 00:25:09.342 "superblock": true, 00:25:09.342 "num_base_bdevs": 4, 00:25:09.342 "num_base_bdevs_discovered": 1, 00:25:09.342 "num_base_bdevs_operational": 4, 00:25:09.342 "base_bdevs_list": [ 00:25:09.342 { 00:25:09.342 "name": "BaseBdev1", 00:25:09.342 "uuid": "7f97515e-bd8b-411c-9320-a2aa36a73c51", 00:25:09.342 "is_configured": true, 00:25:09.342 "data_offset": 2048, 00:25:09.342 "data_size": 63488 00:25:09.342 }, 00:25:09.342 { 00:25:09.342 "name": "BaseBdev2", 00:25:09.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:09.342 "is_configured": false, 00:25:09.342 "data_offset": 0, 00:25:09.342 "data_size": 0 00:25:09.342 }, 00:25:09.342 { 00:25:09.342 "name": "BaseBdev3", 00:25:09.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:09.342 "is_configured": false, 00:25:09.342 "data_offset": 0, 00:25:09.342 "data_size": 0 00:25:09.342 }, 00:25:09.342 { 00:25:09.342 "name": "BaseBdev4", 00:25:09.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:09.342 "is_configured": false, 00:25:09.342 "data_offset": 0, 00:25:09.342 "data_size": 0 00:25:09.342 } 00:25:09.342 ] 00:25:09.342 }' 00:25:09.342 10:39:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:09.342 10:39:03 -- common/autotest_common.sh@10 -- # set +x 00:25:10.274 10:39:03 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:10.274 [2024-07-12 10:39:03.996737] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:10.274 [2024-07-12 10:39:03.996783] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:25:10.274 10:39:04 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:25:10.274 10:39:04 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:25:10.532 10:39:04 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:10.791 BaseBdev1 00:25:10.791 10:39:04 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:25:10.791 10:39:04 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:25:10.791 10:39:04 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:25:10.791 10:39:04 -- common/autotest_common.sh@889 -- # local i 00:25:10.791 10:39:04 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:25:10.791 10:39:04 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:25:10.791 10:39:04 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:11.049 10:39:04 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:11.049 [ 00:25:11.049 { 00:25:11.049 "name": "BaseBdev1", 00:25:11.049 "aliases": [ 00:25:11.049 
"bff74ccd-b509-4e53-a8ea-80f3f443cd3a" 00:25:11.049 ], 00:25:11.049 "product_name": "Malloc disk", 00:25:11.049 "block_size": 512, 00:25:11.049 "num_blocks": 65536, 00:25:11.049 "uuid": "bff74ccd-b509-4e53-a8ea-80f3f443cd3a", 00:25:11.049 "assigned_rate_limits": { 00:25:11.049 "rw_ios_per_sec": 0, 00:25:11.049 "rw_mbytes_per_sec": 0, 00:25:11.049 "r_mbytes_per_sec": 0, 00:25:11.049 "w_mbytes_per_sec": 0 00:25:11.049 }, 00:25:11.049 "claimed": false, 00:25:11.049 "zoned": false, 00:25:11.049 "supported_io_types": { 00:25:11.049 "read": true, 00:25:11.049 "write": true, 00:25:11.049 "unmap": true, 00:25:11.049 "write_zeroes": true, 00:25:11.049 "flush": true, 00:25:11.049 "reset": true, 00:25:11.049 "compare": false, 00:25:11.049 "compare_and_write": false, 00:25:11.050 "abort": true, 00:25:11.050 "nvme_admin": false, 00:25:11.050 "nvme_io": false 00:25:11.050 }, 00:25:11.050 "memory_domains": [ 00:25:11.050 { 00:25:11.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:11.050 "dma_device_type": 2 00:25:11.050 } 00:25:11.050 ], 00:25:11.050 "driver_specific": {} 00:25:11.050 } 00:25:11.050 ] 00:25:11.050 10:39:04 -- common/autotest_common.sh@895 -- # return 0 00:25:11.050 10:39:04 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:11.308 [2024-07-12 10:39:05.104159] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:11.308 [2024-07-12 10:39:05.106088] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:11.308 [2024-07-12 10:39:05.106158] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:11.308 [2024-07-12 10:39:05.106170] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:11.308 [2024-07-12 10:39:05.106195] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:11.308 [2024-07-12 10:39:05.106203] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:11.308 [2024-07-12 10:39:05.106218] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:11.308 10:39:05 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:25:11.308 10:39:05 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:25:11.308 10:39:05 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:25:11.308 10:39:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:11.308 10:39:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:11.308 10:39:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:11.308 10:39:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:11.308 10:39:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:11.308 10:39:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:11.308 10:39:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:11.308 10:39:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:11.308 10:39:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:11.308 10:39:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:11.308 10:39:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:11.565 10:39:05 -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:25:11.565 "name": "Existed_Raid", 00:25:11.565 "uuid": "18d15c15-fef2-49f7-a0d5-85fb8ed87df8", 00:25:11.565 "strip_size_kb": 64, 00:25:11.565 "state": "configuring", 00:25:11.565 "raid_level": "raid5f", 00:25:11.565 "superblock": true, 00:25:11.565 "num_base_bdevs": 4, 00:25:11.565 "num_base_bdevs_discovered": 1, 00:25:11.565 "num_base_bdevs_operational": 4, 00:25:11.565 "base_bdevs_list": [ 00:25:11.565 { 00:25:11.565 "name": "BaseBdev1", 00:25:11.565 "uuid": "bff74ccd-b509-4e53-a8ea-80f3f443cd3a", 00:25:11.565 "is_configured": true, 00:25:11.565 "data_offset": 2048, 00:25:11.565 "data_size": 63488 00:25:11.565 }, 00:25:11.565 { 00:25:11.565 "name": "BaseBdev2", 00:25:11.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:11.565 "is_configured": false, 00:25:11.565 "data_offset": 0, 00:25:11.565 "data_size": 0 00:25:11.565 }, 00:25:11.565 { 00:25:11.565 "name": "BaseBdev3", 00:25:11.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:11.565 "is_configured": false, 00:25:11.565 "data_offset": 0, 00:25:11.565 "data_size": 0 00:25:11.565 }, 00:25:11.565 { 00:25:11.565 "name": "BaseBdev4", 00:25:11.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:11.565 "is_configured": false, 00:25:11.565 "data_offset": 0, 00:25:11.565 "data_size": 0 00:25:11.565 } 00:25:11.565 ] 00:25:11.565 }' 00:25:11.565 10:39:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:11.565 10:39:05 -- common/autotest_common.sh@10 -- # set +x 00:25:12.128 10:39:05 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:12.386 [2024-07-12 10:39:06.181340] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:12.386 BaseBdev2 00:25:12.386 10:39:06 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:25:12.386 10:39:06 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:25:12.386 10:39:06 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:25:12.386 10:39:06 -- common/autotest_common.sh@889 -- # local i 00:25:12.386 10:39:06 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:25:12.386 10:39:06 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:25:12.386 10:39:06 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:12.643 10:39:06 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:12.900 [ 00:25:12.900 { 00:25:12.900 "name": "BaseBdev2", 00:25:12.900 "aliases": [ 00:25:12.900 "f1a94d21-14b1-43a3-8a72-88d49303e3ba" 00:25:12.900 ], 00:25:12.900 "product_name": "Malloc disk", 00:25:12.900 "block_size": 512, 00:25:12.900 "num_blocks": 65536, 00:25:12.900 "uuid": "f1a94d21-14b1-43a3-8a72-88d49303e3ba", 00:25:12.900 "assigned_rate_limits": { 00:25:12.900 "rw_ios_per_sec": 0, 00:25:12.900 "rw_mbytes_per_sec": 0, 00:25:12.901 "r_mbytes_per_sec": 0, 00:25:12.901 "w_mbytes_per_sec": 0 00:25:12.901 }, 00:25:12.901 "claimed": true, 00:25:12.901 "claim_type": "exclusive_write", 00:25:12.901 "zoned": false, 00:25:12.901 "supported_io_types": { 00:25:12.901 "read": true, 00:25:12.901 "write": true, 00:25:12.901 "unmap": true, 00:25:12.901 "write_zeroes": true, 00:25:12.901 "flush": true, 00:25:12.901 "reset": true, 00:25:12.901 "compare": false, 00:25:12.901 "compare_and_write": false, 00:25:12.901 "abort": true, 00:25:12.901 "nvme_admin": false, 00:25:12.901 
"nvme_io": false 00:25:12.901 }, 00:25:12.901 "memory_domains": [ 00:25:12.901 { 00:25:12.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:12.901 "dma_device_type": 2 00:25:12.901 } 00:25:12.901 ], 00:25:12.901 "driver_specific": {} 00:25:12.901 } 00:25:12.901 ] 00:25:12.901 10:39:06 -- common/autotest_common.sh@895 -- # return 0 00:25:12.901 10:39:06 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:25:12.901 10:39:06 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:25:12.901 10:39:06 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:25:12.901 10:39:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:12.901 10:39:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:12.901 10:39:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:12.901 10:39:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:12.901 10:39:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:12.901 10:39:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:12.901 10:39:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:12.901 10:39:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:12.901 10:39:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:12.901 10:39:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:12.901 10:39:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:12.901 10:39:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:12.901 "name": "Existed_Raid", 00:25:12.901 "uuid": "18d15c15-fef2-49f7-a0d5-85fb8ed87df8", 00:25:12.901 "strip_size_kb": 64, 00:25:12.901 "state": "configuring", 00:25:12.901 "raid_level": "raid5f", 00:25:12.901 "superblock": true, 00:25:12.901 "num_base_bdevs": 4, 00:25:12.901 "num_base_bdevs_discovered": 2, 00:25:12.901 "num_base_bdevs_operational": 4, 00:25:12.901 "base_bdevs_list": [ 00:25:12.901 { 00:25:12.901 "name": "BaseBdev1", 00:25:12.901 "uuid": "bff74ccd-b509-4e53-a8ea-80f3f443cd3a", 00:25:12.901 "is_configured": true, 00:25:12.901 "data_offset": 2048, 00:25:12.901 "data_size": 63488 00:25:12.901 }, 00:25:12.901 { 00:25:12.901 "name": "BaseBdev2", 00:25:12.901 "uuid": "f1a94d21-14b1-43a3-8a72-88d49303e3ba", 00:25:12.901 "is_configured": true, 00:25:12.901 "data_offset": 2048, 00:25:12.901 "data_size": 63488 00:25:12.901 }, 00:25:12.901 { 00:25:12.901 "name": "BaseBdev3", 00:25:12.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:12.901 "is_configured": false, 00:25:12.901 "data_offset": 0, 00:25:12.901 "data_size": 0 00:25:12.901 }, 00:25:12.901 { 00:25:12.901 "name": "BaseBdev4", 00:25:12.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:12.901 "is_configured": false, 00:25:12.901 "data_offset": 0, 00:25:12.901 "data_size": 0 00:25:12.901 } 00:25:12.901 ] 00:25:12.901 }' 00:25:12.901 10:39:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:12.901 10:39:06 -- common/autotest_common.sh@10 -- # set +x 00:25:13.835 10:39:07 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:25:13.835 [2024-07-12 10:39:07.737115] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:13.835 BaseBdev3 00:25:14.092 10:39:07 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:25:14.092 10:39:07 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:25:14.092 10:39:07 -- 
common/autotest_common.sh@888 -- # local bdev_timeout= 00:25:14.092 10:39:07 -- common/autotest_common.sh@889 -- # local i 00:25:14.092 10:39:07 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:25:14.092 10:39:07 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:25:14.092 10:39:07 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:14.092 10:39:07 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:14.350 [ 00:25:14.350 { 00:25:14.350 "name": "BaseBdev3", 00:25:14.350 "aliases": [ 00:25:14.350 "1ff571c5-c8fc-403b-b42b-3e28f2cac4e6" 00:25:14.350 ], 00:25:14.350 "product_name": "Malloc disk", 00:25:14.350 "block_size": 512, 00:25:14.350 "num_blocks": 65536, 00:25:14.350 "uuid": "1ff571c5-c8fc-403b-b42b-3e28f2cac4e6", 00:25:14.350 "assigned_rate_limits": { 00:25:14.350 "rw_ios_per_sec": 0, 00:25:14.350 "rw_mbytes_per_sec": 0, 00:25:14.350 "r_mbytes_per_sec": 0, 00:25:14.350 "w_mbytes_per_sec": 0 00:25:14.350 }, 00:25:14.350 "claimed": true, 00:25:14.350 "claim_type": "exclusive_write", 00:25:14.350 "zoned": false, 00:25:14.350 "supported_io_types": { 00:25:14.350 "read": true, 00:25:14.350 "write": true, 00:25:14.350 "unmap": true, 00:25:14.350 "write_zeroes": true, 00:25:14.350 "flush": true, 00:25:14.350 "reset": true, 00:25:14.350 "compare": false, 00:25:14.350 "compare_and_write": false, 00:25:14.350 "abort": true, 00:25:14.350 "nvme_admin": false, 00:25:14.350 "nvme_io": false 00:25:14.350 }, 00:25:14.350 "memory_domains": [ 00:25:14.350 { 00:25:14.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:14.350 "dma_device_type": 2 00:25:14.350 } 00:25:14.350 ], 00:25:14.350 "driver_specific": {} 00:25:14.350 } 00:25:14.350 ] 00:25:14.350 10:39:08 -- common/autotest_common.sh@895 -- # return 0 00:25:14.350 10:39:08 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:25:14.350 10:39:08 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:25:14.350 10:39:08 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:25:14.350 10:39:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:14.350 10:39:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:14.350 10:39:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:14.350 10:39:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:14.350 10:39:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:14.350 10:39:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:14.350 10:39:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:14.350 10:39:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:14.350 10:39:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:14.350 10:39:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:14.350 10:39:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:14.608 10:39:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:14.608 "name": "Existed_Raid", 00:25:14.608 "uuid": "18d15c15-fef2-49f7-a0d5-85fb8ed87df8", 00:25:14.608 "strip_size_kb": 64, 00:25:14.608 "state": "configuring", 00:25:14.608 "raid_level": "raid5f", 00:25:14.608 "superblock": true, 00:25:14.608 "num_base_bdevs": 4, 00:25:14.608 "num_base_bdevs_discovered": 3, 00:25:14.608 "num_base_bdevs_operational": 4, 
00:25:14.608 "base_bdevs_list": [ 00:25:14.608 { 00:25:14.608 "name": "BaseBdev1", 00:25:14.608 "uuid": "bff74ccd-b509-4e53-a8ea-80f3f443cd3a", 00:25:14.608 "is_configured": true, 00:25:14.608 "data_offset": 2048, 00:25:14.608 "data_size": 63488 00:25:14.608 }, 00:25:14.608 { 00:25:14.608 "name": "BaseBdev2", 00:25:14.608 "uuid": "f1a94d21-14b1-43a3-8a72-88d49303e3ba", 00:25:14.608 "is_configured": true, 00:25:14.608 "data_offset": 2048, 00:25:14.608 "data_size": 63488 00:25:14.608 }, 00:25:14.608 { 00:25:14.608 "name": "BaseBdev3", 00:25:14.608 "uuid": "1ff571c5-c8fc-403b-b42b-3e28f2cac4e6", 00:25:14.608 "is_configured": true, 00:25:14.608 "data_offset": 2048, 00:25:14.608 "data_size": 63488 00:25:14.608 }, 00:25:14.608 { 00:25:14.608 "name": "BaseBdev4", 00:25:14.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:14.608 "is_configured": false, 00:25:14.608 "data_offset": 0, 00:25:14.608 "data_size": 0 00:25:14.608 } 00:25:14.608 ] 00:25:14.608 }' 00:25:14.608 10:39:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:14.608 10:39:08 -- common/autotest_common.sh@10 -- # set +x 00:25:15.174 10:39:08 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:25:15.432 [2024-07-12 10:39:09.280889] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:15.432 [2024-07-12 10:39:09.281118] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:25:15.432 [2024-07-12 10:39:09.281132] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:25:15.432 BaseBdev4 00:25:15.432 [2024-07-12 10:39:09.281276] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:25:15.432 [2024-07-12 10:39:09.286801] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:25:15.432 [2024-07-12 10:39:09.286825] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:25:15.432 [2024-07-12 10:39:09.286983] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:15.432 10:39:09 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:25:15.432 10:39:09 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:25:15.432 10:39:09 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:25:15.432 10:39:09 -- common/autotest_common.sh@889 -- # local i 00:25:15.432 10:39:09 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:25:15.432 10:39:09 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:25:15.432 10:39:09 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:15.691 10:39:09 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:25:15.949 [ 00:25:15.949 { 00:25:15.949 "name": "BaseBdev4", 00:25:15.949 "aliases": [ 00:25:15.949 "87e36a0a-531c-46a9-9763-421863d7b1e7" 00:25:15.949 ], 00:25:15.949 "product_name": "Malloc disk", 00:25:15.949 "block_size": 512, 00:25:15.949 "num_blocks": 65536, 00:25:15.949 "uuid": "87e36a0a-531c-46a9-9763-421863d7b1e7", 00:25:15.949 "assigned_rate_limits": { 00:25:15.949 "rw_ios_per_sec": 0, 00:25:15.949 "rw_mbytes_per_sec": 0, 00:25:15.949 "r_mbytes_per_sec": 0, 00:25:15.949 "w_mbytes_per_sec": 0 00:25:15.949 }, 00:25:15.949 "claimed": true, 00:25:15.949 "claim_type": 
"exclusive_write", 00:25:15.949 "zoned": false, 00:25:15.949 "supported_io_types": { 00:25:15.949 "read": true, 00:25:15.949 "write": true, 00:25:15.949 "unmap": true, 00:25:15.949 "write_zeroes": true, 00:25:15.949 "flush": true, 00:25:15.949 "reset": true, 00:25:15.949 "compare": false, 00:25:15.949 "compare_and_write": false, 00:25:15.949 "abort": true, 00:25:15.949 "nvme_admin": false, 00:25:15.949 "nvme_io": false 00:25:15.949 }, 00:25:15.949 "memory_domains": [ 00:25:15.949 { 00:25:15.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:15.949 "dma_device_type": 2 00:25:15.949 } 00:25:15.949 ], 00:25:15.949 "driver_specific": {} 00:25:15.949 } 00:25:15.949 ] 00:25:15.949 10:39:09 -- common/autotest_common.sh@895 -- # return 0 00:25:15.949 10:39:09 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:25:15.949 10:39:09 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:25:15.949 10:39:09 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:25:15.949 10:39:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:15.949 10:39:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:15.950 10:39:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:15.950 10:39:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:15.950 10:39:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:15.950 10:39:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:15.950 10:39:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:15.950 10:39:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:15.950 10:39:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:15.950 10:39:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:15.950 10:39:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:16.207 10:39:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:16.207 "name": "Existed_Raid", 00:25:16.207 "uuid": "18d15c15-fef2-49f7-a0d5-85fb8ed87df8", 00:25:16.207 "strip_size_kb": 64, 00:25:16.208 "state": "online", 00:25:16.208 "raid_level": "raid5f", 00:25:16.208 "superblock": true, 00:25:16.208 "num_base_bdevs": 4, 00:25:16.208 "num_base_bdevs_discovered": 4, 00:25:16.208 "num_base_bdevs_operational": 4, 00:25:16.208 "base_bdevs_list": [ 00:25:16.208 { 00:25:16.208 "name": "BaseBdev1", 00:25:16.208 "uuid": "bff74ccd-b509-4e53-a8ea-80f3f443cd3a", 00:25:16.208 "is_configured": true, 00:25:16.208 "data_offset": 2048, 00:25:16.208 "data_size": 63488 00:25:16.208 }, 00:25:16.208 { 00:25:16.208 "name": "BaseBdev2", 00:25:16.208 "uuid": "f1a94d21-14b1-43a3-8a72-88d49303e3ba", 00:25:16.208 "is_configured": true, 00:25:16.208 "data_offset": 2048, 00:25:16.208 "data_size": 63488 00:25:16.208 }, 00:25:16.208 { 00:25:16.208 "name": "BaseBdev3", 00:25:16.208 "uuid": "1ff571c5-c8fc-403b-b42b-3e28f2cac4e6", 00:25:16.208 "is_configured": true, 00:25:16.208 "data_offset": 2048, 00:25:16.208 "data_size": 63488 00:25:16.208 }, 00:25:16.208 { 00:25:16.208 "name": "BaseBdev4", 00:25:16.208 "uuid": "87e36a0a-531c-46a9-9763-421863d7b1e7", 00:25:16.208 "is_configured": true, 00:25:16.208 "data_offset": 2048, 00:25:16.208 "data_size": 63488 00:25:16.208 } 00:25:16.208 ] 00:25:16.208 }' 00:25:16.208 10:39:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:16.208 10:39:10 -- common/autotest_common.sh@10 -- # set +x 00:25:16.774 10:39:10 -- bdev/bdev_raid.sh@262 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:25:17.033 [2024-07-12 10:39:10.897012] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:17.290 10:39:10 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:25:17.290 10:39:10 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:25:17.290 10:39:10 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:25:17.290 10:39:10 -- bdev/bdev_raid.sh@196 -- # return 0 00:25:17.290 10:39:10 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:25:17.290 10:39:10 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:25:17.290 10:39:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:17.290 10:39:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:17.290 10:39:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:17.290 10:39:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:17.290 10:39:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:17.290 10:39:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:17.290 10:39:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:17.290 10:39:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:17.290 10:39:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:17.290 10:39:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:17.290 10:39:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:17.548 10:39:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:17.548 "name": "Existed_Raid", 00:25:17.548 "uuid": "18d15c15-fef2-49f7-a0d5-85fb8ed87df8", 00:25:17.548 "strip_size_kb": 64, 00:25:17.548 "state": "online", 00:25:17.548 "raid_level": "raid5f", 00:25:17.548 "superblock": true, 00:25:17.548 "num_base_bdevs": 4, 00:25:17.548 "num_base_bdevs_discovered": 3, 00:25:17.548 "num_base_bdevs_operational": 3, 00:25:17.548 "base_bdevs_list": [ 00:25:17.548 { 00:25:17.548 "name": null, 00:25:17.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:17.548 "is_configured": false, 00:25:17.548 "data_offset": 2048, 00:25:17.548 "data_size": 63488 00:25:17.548 }, 00:25:17.548 { 00:25:17.548 "name": "BaseBdev2", 00:25:17.548 "uuid": "f1a94d21-14b1-43a3-8a72-88d49303e3ba", 00:25:17.548 "is_configured": true, 00:25:17.548 "data_offset": 2048, 00:25:17.548 "data_size": 63488 00:25:17.548 }, 00:25:17.548 { 00:25:17.548 "name": "BaseBdev3", 00:25:17.548 "uuid": "1ff571c5-c8fc-403b-b42b-3e28f2cac4e6", 00:25:17.548 "is_configured": true, 00:25:17.548 "data_offset": 2048, 00:25:17.548 "data_size": 63488 00:25:17.548 }, 00:25:17.548 { 00:25:17.548 "name": "BaseBdev4", 00:25:17.548 "uuid": "87e36a0a-531c-46a9-9763-421863d7b1e7", 00:25:17.548 "is_configured": true, 00:25:17.548 "data_offset": 2048, 00:25:17.548 "data_size": 63488 00:25:17.548 } 00:25:17.548 ] 00:25:17.548 }' 00:25:17.548 10:39:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:17.548 10:39:11 -- common/autotest_common.sh@10 -- # set +x 00:25:18.116 10:39:11 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:25:18.116 10:39:11 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:25:18.116 10:39:11 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:18.116 10:39:11 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:25:18.374 10:39:12 -- bdev/bdev_raid.sh@274 -- # 
raid_bdev=Existed_Raid 00:25:18.374 10:39:12 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:18.374 10:39:12 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:25:18.633 [2024-07-12 10:39:12.335740] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:18.633 [2024-07-12 10:39:12.335778] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:18.633 [2024-07-12 10:39:12.335846] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:18.633 10:39:12 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:25:18.633 10:39:12 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:25:18.633 10:39:12 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:18.633 10:39:12 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:25:18.892 10:39:12 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:25:18.892 10:39:12 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:18.892 10:39:12 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:25:19.150 [2024-07-12 10:39:12.823675] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:19.150 10:39:12 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:25:19.150 10:39:12 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:25:19.150 10:39:12 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:19.150 10:39:12 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:25:19.408 10:39:13 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:25:19.408 10:39:13 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:19.408 10:39:13 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:25:19.408 [2024-07-12 10:39:13.310349] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:25:19.408 [2024-07-12 10:39:13.310407] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:25:19.665 10:39:13 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:25:19.665 10:39:13 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:25:19.665 10:39:13 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:19.665 10:39:13 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:25:19.665 10:39:13 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:25:19.665 10:39:13 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:25:19.665 10:39:13 -- bdev/bdev_raid.sh@287 -- # killprocess 133777 00:25:19.665 10:39:13 -- common/autotest_common.sh@926 -- # '[' -z 133777 ']' 00:25:19.665 10:39:13 -- common/autotest_common.sh@930 -- # kill -0 133777 00:25:19.665 10:39:13 -- common/autotest_common.sh@931 -- # uname 00:25:19.665 10:39:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:19.665 10:39:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 133777 00:25:19.924 killing process with pid 133777 00:25:19.924 10:39:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:19.924 10:39:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:19.924 10:39:13 -- 
common/autotest_common.sh@944 -- # echo 'killing process with pid 133777' 00:25:19.924 10:39:13 -- common/autotest_common.sh@945 -- # kill 133777 00:25:19.924 10:39:13 -- common/autotest_common.sh@950 -- # wait 133777 00:25:19.924 [2024-07-12 10:39:13.588476] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:19.924 [2024-07-12 10:39:13.588592] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:20.860 ************************************ 00:25:20.860 END TEST raid5f_state_function_test_sb 00:25:20.860 ************************************ 00:25:20.860 10:39:14 -- bdev/bdev_raid.sh@289 -- # return 0 00:25:20.860 00:25:20.860 real 0m14.583s 00:25:20.860 user 0m26.296s 00:25:20.860 sys 0m1.487s 00:25:20.860 10:39:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:20.860 10:39:14 -- common/autotest_common.sh@10 -- # set +x 00:25:20.860 10:39:14 -- bdev/bdev_raid.sh@746 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:25:20.860 10:39:14 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:25:20.860 10:39:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:20.860 10:39:14 -- common/autotest_common.sh@10 -- # set +x 00:25:20.860 ************************************ 00:25:20.860 START TEST raid5f_superblock_test 00:25:20.860 ************************************ 00:25:20.860 10:39:14 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid5f 4 00:25:20.860 10:39:14 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid5f 00:25:20.860 10:39:14 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:25:20.860 10:39:14 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:25:20.860 10:39:14 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:25:20.860 10:39:14 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:25:20.860 10:39:14 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:25:20.860 10:39:14 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:25:20.860 10:39:14 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:25:20.860 10:39:14 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:25:20.860 10:39:14 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:25:20.860 10:39:14 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:25:20.860 10:39:14 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:25:20.860 10:39:14 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:25:20.860 10:39:14 -- bdev/bdev_raid.sh@349 -- # '[' raid5f '!=' raid1 ']' 00:25:20.860 10:39:14 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:25:20.860 10:39:14 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:25:20.860 10:39:14 -- bdev/bdev_raid.sh@357 -- # raid_pid=134239 00:25:20.860 10:39:14 -- bdev/bdev_raid.sh@358 -- # waitforlisten 134239 /var/tmp/spdk-raid.sock 00:25:20.860 10:39:14 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:25:20.860 10:39:14 -- common/autotest_common.sh@819 -- # '[' -z 134239 ']' 00:25:20.860 10:39:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:20.860 10:39:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:20.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:20.860 10:39:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
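Note: raid5f_superblock_test (pid 134239, launched here) prepares its members differently from the state-function tests above: each malloc bdev is wrapped in a passthru bdev with a fixed UUID before assembly, so the superblock metadata written to each member is predictable across runs. Condensed from the trace that follows — the per-member pattern plus the final create (member 1 shown; the UUID is the fixed test value for pt1):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
  # ...likewise for pt2..pt4, then assemble with the superblock flag:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s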
00:25:20.860 10:39:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:20.860 10:39:14 -- common/autotest_common.sh@10 -- # set +x 00:25:20.860 [2024-07-12 10:39:14.732942] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:25:20.860 [2024-07-12 10:39:14.733150] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134239 ] 00:25:21.119 [2024-07-12 10:39:14.895196] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:21.377 [2024-07-12 10:39:15.084250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:21.377 [2024-07-12 10:39:15.267810] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:21.943 10:39:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:21.943 10:39:15 -- common/autotest_common.sh@852 -- # return 0 00:25:21.943 10:39:15 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:25:21.943 10:39:15 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:25:21.943 10:39:15 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:25:21.943 10:39:15 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:25:21.943 10:39:15 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:25:21.943 10:39:15 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:21.943 10:39:15 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:25:21.943 10:39:15 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:21.943 10:39:15 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:25:22.201 malloc1 00:25:22.201 10:39:15 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:22.201 [2024-07-12 10:39:16.110345] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:22.202 [2024-07-12 10:39:16.110432] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:22.202 [2024-07-12 10:39:16.110469] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:25:22.202 [2024-07-12 10:39:16.110517] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:22.202 [2024-07-12 10:39:16.112987] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:22.202 [2024-07-12 10:39:16.113051] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:22.202 pt1 00:25:22.459 10:39:16 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:25:22.459 10:39:16 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:25:22.459 10:39:16 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:25:22.459 10:39:16 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:25:22.459 10:39:16 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:25:22.459 10:39:16 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:22.459 10:39:16 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:25:22.459 10:39:16 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:22.459 10:39:16 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:25:22.459 malloc2 00:25:22.460 10:39:16 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:22.718 [2024-07-12 10:39:16.503056] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:22.718 [2024-07-12 10:39:16.503120] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:22.718 [2024-07-12 10:39:16.503161] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:25:22.718 [2024-07-12 10:39:16.503213] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:22.718 [2024-07-12 10:39:16.505368] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:22.718 [2024-07-12 10:39:16.505413] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:22.718 pt2 00:25:22.718 10:39:16 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:25:22.718 10:39:16 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:25:22.718 10:39:16 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:25:22.718 10:39:16 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:25:22.718 10:39:16 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:25:22.718 10:39:16 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:22.718 10:39:16 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:25:22.718 10:39:16 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:22.718 10:39:16 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:25:22.976 malloc3 00:25:22.976 10:39:16 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:25:22.976 [2024-07-12 10:39:16.880247] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:25:22.976 [2024-07-12 10:39:16.880310] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:22.976 [2024-07-12 10:39:16.880347] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:25:22.976 [2024-07-12 10:39:16.880392] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:22.976 [2024-07-12 10:39:16.882533] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:22.976 [2024-07-12 10:39:16.882583] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:25:22.976 pt3 00:25:23.234 10:39:16 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:25:23.234 10:39:16 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:25:23.234 10:39:16 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:25:23.235 10:39:16 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:25:23.235 10:39:16 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:25:23.235 10:39:16 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:23.235 10:39:16 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:25:23.235 10:39:16 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:23.235 10:39:16 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:25:23.493 malloc4 00:25:23.493 10:39:17 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:25:23.493 [2024-07-12 10:39:17.393619] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:25:23.493 [2024-07-12 10:39:17.393701] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:23.493 [2024-07-12 10:39:17.393742] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:25:23.493 [2024-07-12 10:39:17.393783] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:23.493 [2024-07-12 10:39:17.395923] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:23.493 [2024-07-12 10:39:17.395972] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:25:23.493 pt4 00:25:23.493 10:39:17 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:25:23.493 10:39:17 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:25:23.493 10:39:17 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:25:23.751 [2024-07-12 10:39:17.573714] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:23.751 [2024-07-12 10:39:17.575546] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:23.751 [2024-07-12 10:39:17.575614] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:23.751 [2024-07-12 10:39:17.575690] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:25:23.751 [2024-07-12 10:39:17.575881] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:25:23.751 [2024-07-12 10:39:17.575899] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:25:23.751 [2024-07-12 10:39:17.576008] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:25:23.751 [2024-07-12 10:39:17.581392] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:25:23.751 [2024-07-12 10:39:17.581414] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:25:23.751 [2024-07-12 10:39:17.581571] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:23.751 10:39:17 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:25:23.751 10:39:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:23.752 10:39:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:23.752 10:39:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:23.752 10:39:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:23.752 10:39:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:23.752 10:39:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:23.752 10:39:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:23.752 10:39:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:23.752 10:39:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:23.752 10:39:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
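With all four passthru bdevs claimed, one call assembles the array: -z 64 applies the 64 KiB strip size chosen at bdev_raid.sh@350-351, and -s requests the on-disk superblock this test exercises. verify_raid_bdev_state then filters the bdev_raid_get_bdevs output down to the array under test, expecting state online with 4 of 4 members. The pair of calls, reusing the assumed $rpc shorthand:

$rpc bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
# Expect "state": "online" with num_base_bdevs_discovered == 4:
$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'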
00:25:23.752 10:39:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:24.010 10:39:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:24.010 "name": "raid_bdev1", 00:25:24.010 "uuid": "d89b213b-e860-4f62-a35d-f4ff78790cca", 00:25:24.010 "strip_size_kb": 64, 00:25:24.010 "state": "online", 00:25:24.010 "raid_level": "raid5f", 00:25:24.010 "superblock": true, 00:25:24.010 "num_base_bdevs": 4, 00:25:24.010 "num_base_bdevs_discovered": 4, 00:25:24.010 "num_base_bdevs_operational": 4, 00:25:24.010 "base_bdevs_list": [ 00:25:24.010 { 00:25:24.010 "name": "pt1", 00:25:24.010 "uuid": "c49f7efb-7c3c-583e-80b1-de63fc113bf7", 00:25:24.010 "is_configured": true, 00:25:24.010 "data_offset": 2048, 00:25:24.010 "data_size": 63488 00:25:24.010 }, 00:25:24.010 { 00:25:24.010 "name": "pt2", 00:25:24.010 "uuid": "a62911fc-5fb9-5984-808f-985256a0ac14", 00:25:24.010 "is_configured": true, 00:25:24.010 "data_offset": 2048, 00:25:24.010 "data_size": 63488 00:25:24.010 }, 00:25:24.010 { 00:25:24.010 "name": "pt3", 00:25:24.010 "uuid": "03b77a26-386b-5880-adaa-de5a44bc4b5d", 00:25:24.010 "is_configured": true, 00:25:24.010 "data_offset": 2048, 00:25:24.010 "data_size": 63488 00:25:24.010 }, 00:25:24.010 { 00:25:24.010 "name": "pt4", 00:25:24.010 "uuid": "9dbe81b9-742e-5975-8970-7b8c46bd99e0", 00:25:24.010 "is_configured": true, 00:25:24.010 "data_offset": 2048, 00:25:24.010 "data_size": 63488 00:25:24.010 } 00:25:24.010 ] 00:25:24.010 }' 00:25:24.010 10:39:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:24.010 10:39:17 -- common/autotest_common.sh@10 -- # set +x 00:25:24.575 10:39:18 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:24.575 10:39:18 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:25:24.833 [2024-07-12 10:39:18.600020] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:24.833 10:39:18 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=d89b213b-e860-4f62-a35d-f4ff78790cca 00:25:24.834 10:39:18 -- bdev/bdev_raid.sh@380 -- # '[' -z d89b213b-e860-4f62-a35d-f4ff78790cca ']' 00:25:24.834 10:39:18 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:25.092 [2024-07-12 10:39:18.847938] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:25.092 [2024-07-12 10:39:18.847962] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:25.092 [2024-07-12 10:39:18.848030] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:25.092 [2024-07-12 10:39:18.848098] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:25.092 [2024-07-12 10:39:18.848108] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:25:25.092 10:39:18 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:25.092 10:39:18 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:25:25.350 10:39:19 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:25:25.350 10:39:19 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:25:25.350 10:39:19 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:25:25.350 10:39:19 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
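The teardown traced here runs in reverse order of setup: capture the array UUID for the later identity check, delete the array (the log shows it passing from online to offline before the base bdevs are freed), confirm bdev_raid_get_bdevs reports nothing, then drop each passthru bdev. Sketched under the same shorthand assumption:

uuid=$($rpc bdev_get_bdevs -b raid_bdev1 | jq -r '.[] | .uuid')
[ -n "$uuid" ]                                   # a superblock-backed array must expose a UUID
$rpc bdev_raid_delete raid_bdev1
[ -z "$($rpc bdev_raid_get_bdevs all | jq -r '.[]')" ]
for pt in pt1 pt2 pt3 pt4; do                    # malloc bdevs (and their superblocks) survive
    $rpc bdev_passthru_delete "$pt"
done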
00:25:25.350 10:39:19 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:25:25.350 10:39:19 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:25:25.607 10:39:19 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:25:25.607 10:39:19 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:25:25.865 10:39:19 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:25:25.865 10:39:19 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:25:26.123 10:39:19 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:25:26.123 10:39:19 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:25:26.124 10:39:19 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:25:26.124 10:39:19 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:25:26.124 10:39:19 -- common/autotest_common.sh@640 -- # local es=0 00:25:26.124 10:39:19 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:25:26.124 10:39:19 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:26.124 10:39:19 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:26.124 10:39:19 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:26.124 10:39:19 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:26.124 10:39:20 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:26.124 10:39:19 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:26.124 10:39:20 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:26.124 10:39:20 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:25:26.124 10:39:20 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:25:26.381 [2024-07-12 10:39:20.172139] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:25:26.381 [2024-07-12 10:39:20.173947] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:25:26.381 [2024-07-12 10:39:20.174023] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:25:26.381 [2024-07-12 10:39:20.174066] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:25:26.381 [2024-07-12 10:39:20.174110] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:25:26.381 [2024-07-12 10:39:20.174168] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:25:26.381 [2024-07-12 10:39:20.174201] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:25:26.382 
[2024-07-12 10:39:20.174256] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:25:26.382 [2024-07-12 10:39:20.174281] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:26.382 [2024-07-12 10:39:20.174290] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state configuring 00:25:26.382 request: 00:25:26.382 { 00:25:26.382 "name": "raid_bdev1", 00:25:26.382 "raid_level": "raid5f", 00:25:26.382 "base_bdevs": [ 00:25:26.382 "malloc1", 00:25:26.382 "malloc2", 00:25:26.382 "malloc3", 00:25:26.382 "malloc4" 00:25:26.382 ], 00:25:26.382 "superblock": false, 00:25:26.382 "strip_size_kb": 64, 00:25:26.382 "method": "bdev_raid_create", 00:25:26.382 "req_id": 1 00:25:26.382 } 00:25:26.382 Got JSON-RPC error response 00:25:26.382 response: 00:25:26.382 { 00:25:26.382 "code": -17, 00:25:26.382 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:25:26.382 } 00:25:26.382 10:39:20 -- common/autotest_common.sh@643 -- # es=1 00:25:26.382 10:39:20 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:26.382 10:39:20 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:26.382 10:39:20 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:26.382 10:39:20 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:26.382 10:39:20 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:25:26.640 10:39:20 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:25:26.640 10:39:20 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:25:26.640 10:39:20 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:26.898 [2024-07-12 10:39:20.599233] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:26.898 [2024-07-12 10:39:20.599316] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:26.898 [2024-07-12 10:39:20.599373] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:25:26.898 [2024-07-12 10:39:20.599411] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:26.898 [2024-07-12 10:39:20.602226] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:26.898 [2024-07-12 10:39:20.602317] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:26.898 [2024-07-12 10:39:20.602446] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:25:26.898 [2024-07-12 10:39:20.602523] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:26.898 pt1 00:25:26.898 10:39:20 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:25:26.898 10:39:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:26.898 10:39:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:26.898 10:39:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:26.898 10:39:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:26.898 10:39:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:26.898 10:39:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:26.898 10:39:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:26.898 10:39:20 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:25:26.898 10:39:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:26.898 10:39:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:26.898 10:39:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:27.156 10:39:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:27.156 "name": "raid_bdev1", 00:25:27.156 "uuid": "d89b213b-e860-4f62-a35d-f4ff78790cca", 00:25:27.156 "strip_size_kb": 64, 00:25:27.156 "state": "configuring", 00:25:27.156 "raid_level": "raid5f", 00:25:27.156 "superblock": true, 00:25:27.156 "num_base_bdevs": 4, 00:25:27.156 "num_base_bdevs_discovered": 1, 00:25:27.156 "num_base_bdevs_operational": 4, 00:25:27.156 "base_bdevs_list": [ 00:25:27.156 { 00:25:27.156 "name": "pt1", 00:25:27.156 "uuid": "c49f7efb-7c3c-583e-80b1-de63fc113bf7", 00:25:27.156 "is_configured": true, 00:25:27.156 "data_offset": 2048, 00:25:27.156 "data_size": 63488 00:25:27.156 }, 00:25:27.156 { 00:25:27.156 "name": null, 00:25:27.156 "uuid": "a62911fc-5fb9-5984-808f-985256a0ac14", 00:25:27.156 "is_configured": false, 00:25:27.156 "data_offset": 2048, 00:25:27.156 "data_size": 63488 00:25:27.156 }, 00:25:27.156 { 00:25:27.156 "name": null, 00:25:27.156 "uuid": "03b77a26-386b-5880-adaa-de5a44bc4b5d", 00:25:27.156 "is_configured": false, 00:25:27.156 "data_offset": 2048, 00:25:27.156 "data_size": 63488 00:25:27.156 }, 00:25:27.156 { 00:25:27.156 "name": null, 00:25:27.156 "uuid": "9dbe81b9-742e-5975-8970-7b8c46bd99e0", 00:25:27.156 "is_configured": false, 00:25:27.156 "data_offset": 2048, 00:25:27.156 "data_size": 63488 00:25:27.156 } 00:25:27.156 ] 00:25:27.156 }' 00:25:27.156 10:39:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:27.156 10:39:20 -- common/autotest_common.sh@10 -- # set +x 00:25:27.722 10:39:21 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:25:27.722 10:39:21 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:27.980 [2024-07-12 10:39:21.739378] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:27.981 [2024-07-12 10:39:21.739430] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:27.981 [2024-07-12 10:39:21.739464] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:25:27.981 [2024-07-12 10:39:21.739483] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:27.981 [2024-07-12 10:39:21.739843] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:27.981 [2024-07-12 10:39:21.739892] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:27.981 [2024-07-12 10:39:21.739971] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:25:27.981 [2024-07-12 10:39:21.739994] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:27.981 pt2 00:25:27.981 10:39:21 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:25:28.239 [2024-07-12 10:39:21.987417] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:25:28.239 10:39:21 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:25:28.239 10:39:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 
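The JSON-RPC request/response dump a little earlier is the expected failure path: each malloc bdev still carries the superblock written by the first assembly, so building a fresh array directly on top of them is refused with error -17 (File exists), which the NOT wrapper at autotest_common.sh@640-667 converts into a passing assertion. A minimal equivalent check, same shorthand assumption:

if $rpc bdev_raid_create -z 64 -r raid5f \
        -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1; then
    echo 'unexpected success: existing superblocks should block re-creation' >&2
    exit 1
fi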
00:25:28.239 10:39:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:28.239 10:39:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:28.239 10:39:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:28.239 10:39:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:28.239 10:39:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:28.239 10:39:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:28.239 10:39:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:28.239 10:39:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:28.239 10:39:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:28.239 10:39:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:28.497 10:39:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:28.497 "name": "raid_bdev1", 00:25:28.497 "uuid": "d89b213b-e860-4f62-a35d-f4ff78790cca", 00:25:28.497 "strip_size_kb": 64, 00:25:28.497 "state": "configuring", 00:25:28.497 "raid_level": "raid5f", 00:25:28.497 "superblock": true, 00:25:28.497 "num_base_bdevs": 4, 00:25:28.497 "num_base_bdevs_discovered": 1, 00:25:28.497 "num_base_bdevs_operational": 4, 00:25:28.497 "base_bdevs_list": [ 00:25:28.497 { 00:25:28.497 "name": "pt1", 00:25:28.497 "uuid": "c49f7efb-7c3c-583e-80b1-de63fc113bf7", 00:25:28.497 "is_configured": true, 00:25:28.497 "data_offset": 2048, 00:25:28.497 "data_size": 63488 00:25:28.497 }, 00:25:28.497 { 00:25:28.497 "name": null, 00:25:28.497 "uuid": "a62911fc-5fb9-5984-808f-985256a0ac14", 00:25:28.497 "is_configured": false, 00:25:28.497 "data_offset": 2048, 00:25:28.497 "data_size": 63488 00:25:28.497 }, 00:25:28.497 { 00:25:28.497 "name": null, 00:25:28.497 "uuid": "03b77a26-386b-5880-adaa-de5a44bc4b5d", 00:25:28.497 "is_configured": false, 00:25:28.497 "data_offset": 2048, 00:25:28.497 "data_size": 63488 00:25:28.497 }, 00:25:28.497 { 00:25:28.497 "name": null, 00:25:28.497 "uuid": "9dbe81b9-742e-5975-8970-7b8c46bd99e0", 00:25:28.497 "is_configured": false, 00:25:28.497 "data_offset": 2048, 00:25:28.497 "data_size": 63488 00:25:28.497 } 00:25:28.497 ] 00:25:28.497 }' 00:25:28.497 10:39:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:28.497 10:39:22 -- common/autotest_common.sh@10 -- # set +x 00:25:29.064 10:39:22 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:25:29.064 10:39:22 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:25:29.064 10:39:22 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:29.321 [2024-07-12 10:39:23.103651] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:29.321 [2024-07-12 10:39:23.103706] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:29.321 [2024-07-12 10:39:23.103738] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:25:29.321 [2024-07-12 10:39:23.103756] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:29.321 [2024-07-12 10:39:23.104099] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:29.321 [2024-07-12 10:39:23.104156] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:29.321 [2024-07-12 10:39:23.104230] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock 
found on bdev pt2 00:25:29.321 [2024-07-12 10:39:23.104251] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:29.321 pt2 00:25:29.321 10:39:23 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:25:29.321 10:39:23 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:25:29.321 10:39:23 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:25:29.578 [2024-07-12 10:39:23.323720] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:25:29.578 [2024-07-12 10:39:23.323773] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:29.578 [2024-07-12 10:39:23.323797] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:25:29.578 [2024-07-12 10:39:23.323818] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:29.578 [2024-07-12 10:39:23.324142] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:29.578 [2024-07-12 10:39:23.324193] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:25:29.578 [2024-07-12 10:39:23.324265] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:25:29.578 [2024-07-12 10:39:23.324286] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:29.578 pt3 00:25:29.578 10:39:23 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:25:29.578 10:39:23 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:25:29.578 10:39:23 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:25:29.836 [2024-07-12 10:39:23.499747] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:25:29.836 [2024-07-12 10:39:23.499801] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:29.836 [2024-07-12 10:39:23.499827] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:25:29.836 [2024-07-12 10:39:23.499853] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:29.836 [2024-07-12 10:39:23.500189] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:29.836 [2024-07-12 10:39:23.500241] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:25:29.836 [2024-07-12 10:39:23.500319] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:25:29.836 [2024-07-12 10:39:23.500340] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:25:29.836 [2024-07-12 10:39:23.500456] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:25:29.836 [2024-07-12 10:39:23.500468] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:25:29.836 [2024-07-12 10:39:23.500584] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:25:29.836 [2024-07-12 10:39:23.505628] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:25:29.836 [2024-07-12 10:39:23.505649] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:25:29.836 pt4 00:25:29.836 [2024-07-12 10:39:23.505790] bdev_raid.c: 
316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:29.836 10:39:23 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:25:29.836 10:39:23 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:25:29.836 10:39:23 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:25:29.836 10:39:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:29.836 10:39:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:29.836 10:39:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:29.836 10:39:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:29.836 10:39:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:29.836 10:39:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:29.836 10:39:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:29.836 10:39:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:29.836 10:39:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:29.836 10:39:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:29.836 10:39:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:30.093 10:39:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:30.093 "name": "raid_bdev1", 00:25:30.093 "uuid": "d89b213b-e860-4f62-a35d-f4ff78790cca", 00:25:30.093 "strip_size_kb": 64, 00:25:30.093 "state": "online", 00:25:30.093 "raid_level": "raid5f", 00:25:30.093 "superblock": true, 00:25:30.093 "num_base_bdevs": 4, 00:25:30.093 "num_base_bdevs_discovered": 4, 00:25:30.093 "num_base_bdevs_operational": 4, 00:25:30.093 "base_bdevs_list": [ 00:25:30.093 { 00:25:30.093 "name": "pt1", 00:25:30.093 "uuid": "c49f7efb-7c3c-583e-80b1-de63fc113bf7", 00:25:30.093 "is_configured": true, 00:25:30.093 "data_offset": 2048, 00:25:30.093 "data_size": 63488 00:25:30.093 }, 00:25:30.093 { 00:25:30.093 "name": "pt2", 00:25:30.093 "uuid": "a62911fc-5fb9-5984-808f-985256a0ac14", 00:25:30.093 "is_configured": true, 00:25:30.093 "data_offset": 2048, 00:25:30.093 "data_size": 63488 00:25:30.093 }, 00:25:30.093 { 00:25:30.093 "name": "pt3", 00:25:30.093 "uuid": "03b77a26-386b-5880-adaa-de5a44bc4b5d", 00:25:30.093 "is_configured": true, 00:25:30.093 "data_offset": 2048, 00:25:30.093 "data_size": 63488 00:25:30.093 }, 00:25:30.093 { 00:25:30.094 "name": "pt4", 00:25:30.094 "uuid": "9dbe81b9-742e-5975-8970-7b8c46bd99e0", 00:25:30.094 "is_configured": true, 00:25:30.094 "data_offset": 2048, 00:25:30.094 "data_size": 63488 00:25:30.094 } 00:25:30.094 ] 00:25:30.094 }' 00:25:30.094 10:39:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:30.094 10:39:23 -- common/autotest_common.sh@10 -- # set +x 00:25:30.659 10:39:24 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:30.659 10:39:24 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:25:30.919 [2024-07-12 10:39:24.603796] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:30.919 10:39:24 -- bdev/bdev_raid.sh@430 -- # '[' d89b213b-e860-4f62-a35d-f4ff78790cca '!=' d89b213b-e860-4f62-a35d-f4ff78790cca ']' 00:25:30.919 10:39:24 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid5f 00:25:30.919 10:39:24 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:25:30.919 10:39:24 -- bdev/bdev_raid.sh@196 -- # return 0 00:25:30.919 10:39:24 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt1 00:25:31.178 [2024-07-12 10:39:24.855747] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:25:31.178 10:39:24 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:31.178 10:39:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:31.178 10:39:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:31.178 10:39:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:31.178 10:39:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:31.178 10:39:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:31.178 10:39:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:31.178 10:39:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:31.178 10:39:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:31.178 10:39:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:31.178 10:39:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:31.178 10:39:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:31.178 10:39:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:31.178 "name": "raid_bdev1", 00:25:31.178 "uuid": "d89b213b-e860-4f62-a35d-f4ff78790cca", 00:25:31.178 "strip_size_kb": 64, 00:25:31.178 "state": "online", 00:25:31.178 "raid_level": "raid5f", 00:25:31.178 "superblock": true, 00:25:31.178 "num_base_bdevs": 4, 00:25:31.178 "num_base_bdevs_discovered": 3, 00:25:31.178 "num_base_bdevs_operational": 3, 00:25:31.178 "base_bdevs_list": [ 00:25:31.178 { 00:25:31.178 "name": null, 00:25:31.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:31.178 "is_configured": false, 00:25:31.178 "data_offset": 2048, 00:25:31.178 "data_size": 63488 00:25:31.178 }, 00:25:31.178 { 00:25:31.178 "name": "pt2", 00:25:31.178 "uuid": "a62911fc-5fb9-5984-808f-985256a0ac14", 00:25:31.178 "is_configured": true, 00:25:31.178 "data_offset": 2048, 00:25:31.178 "data_size": 63488 00:25:31.178 }, 00:25:31.178 { 00:25:31.178 "name": "pt3", 00:25:31.178 "uuid": "03b77a26-386b-5880-adaa-de5a44bc4b5d", 00:25:31.178 "is_configured": true, 00:25:31.178 "data_offset": 2048, 00:25:31.178 "data_size": 63488 00:25:31.178 }, 00:25:31.178 { 00:25:31.178 "name": "pt4", 00:25:31.178 "uuid": "9dbe81b9-742e-5975-8970-7b8c46bd99e0", 00:25:31.178 "is_configured": true, 00:25:31.178 "data_offset": 2048, 00:25:31.178 "data_size": 63488 00:25:31.178 } 00:25:31.178 ] 00:25:31.178 }' 00:25:31.178 10:39:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:31.178 10:39:25 -- common/autotest_common.sh@10 -- # set +x 00:25:32.111 10:39:25 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:32.111 [2024-07-12 10:39:25.919933] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:32.111 [2024-07-12 10:39:25.919956] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:32.112 [2024-07-12 10:39:25.920007] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:32.112 [2024-07-12 10:39:25.920068] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:32.112 [2024-07-12 10:39:25.920079] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:25:32.112 10:39:25 -- bdev/bdev_raid.sh@443 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:32.112 10:39:25 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:25:32.370 10:39:26 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:25:32.370 10:39:26 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:25:32.370 10:39:26 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:25:32.370 10:39:26 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:25:32.370 10:39:26 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:25:32.628 10:39:26 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:25:32.628 10:39:26 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:25:32.628 10:39:26 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:25:32.888 10:39:26 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:25:32.888 10:39:26 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:25:32.888 10:39:26 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:25:33.147 10:39:26 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:25:33.147 10:39:26 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:25:33.147 10:39:26 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:25:33.147 10:39:26 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:25:33.147 10:39:26 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:33.147 [2024-07-12 10:39:27.048137] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:33.147 [2024-07-12 10:39:27.048202] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:33.147 [2024-07-12 10:39:27.048233] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:25:33.147 [2024-07-12 10:39:27.048257] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:33.147 [2024-07-12 10:39:27.050063] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:33.147 [2024-07-12 10:39:27.050124] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:33.147 [2024-07-12 10:39:27.050211] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:25:33.147 [2024-07-12 10:39:27.050259] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:33.147 pt2 00:25:33.422 10:39:27 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:25:33.422 10:39:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:33.422 10:39:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:33.422 10:39:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:33.422 10:39:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:33.422 10:39:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:33.422 10:39:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:33.422 10:39:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:33.422 10:39:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:33.422 10:39:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:33.422 10:39:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:33.422 10:39:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:33.422 10:39:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:33.422 "name": "raid_bdev1", 00:25:33.422 "uuid": "d89b213b-e860-4f62-a35d-f4ff78790cca", 00:25:33.422 "strip_size_kb": 64, 00:25:33.422 "state": "configuring", 00:25:33.422 "raid_level": "raid5f", 00:25:33.422 "superblock": true, 00:25:33.422 "num_base_bdevs": 4, 00:25:33.422 "num_base_bdevs_discovered": 1, 00:25:33.422 "num_base_bdevs_operational": 3, 00:25:33.422 "base_bdevs_list": [ 00:25:33.422 { 00:25:33.422 "name": null, 00:25:33.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:33.422 "is_configured": false, 00:25:33.422 "data_offset": 2048, 00:25:33.422 "data_size": 63488 00:25:33.422 }, 00:25:33.422 { 00:25:33.422 "name": "pt2", 00:25:33.422 "uuid": "a62911fc-5fb9-5984-808f-985256a0ac14", 00:25:33.422 "is_configured": true, 00:25:33.422 "data_offset": 2048, 00:25:33.422 "data_size": 63488 00:25:33.422 }, 00:25:33.422 { 00:25:33.422 "name": null, 00:25:33.422 "uuid": "03b77a26-386b-5880-adaa-de5a44bc4b5d", 00:25:33.422 "is_configured": false, 00:25:33.422 "data_offset": 2048, 00:25:33.422 "data_size": 63488 00:25:33.422 }, 00:25:33.422 { 00:25:33.422 "name": null, 00:25:33.422 "uuid": "9dbe81b9-742e-5975-8970-7b8c46bd99e0", 00:25:33.422 "is_configured": false, 00:25:33.422 "data_offset": 2048, 00:25:33.422 "data_size": 63488 00:25:33.422 } 00:25:33.422 ] 00:25:33.422 }' 00:25:33.422 10:39:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:33.422 10:39:27 -- common/autotest_common.sh@10 -- # set +x 00:25:34.015 10:39:27 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:25:34.015 10:39:27 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:25:34.015 10:39:27 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:25:34.273 [2024-07-12 10:39:28.076283] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:25:34.273 [2024-07-12 10:39:28.076337] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:34.273 [2024-07-12 10:39:28.076368] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:25:34.273 [2024-07-12 10:39:28.076392] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:34.273 [2024-07-12 10:39:28.076750] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:34.273 [2024-07-12 10:39:28.076802] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:25:34.273 [2024-07-12 10:39:28.076880] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:25:34.273 [2024-07-12 10:39:28.076902] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:34.273 pt3 00:25:34.273 10:39:28 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:25:34.273 10:39:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:34.273 10:39:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:34.273 10:39:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:34.273 10:39:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:34.273 10:39:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:34.273 10:39:28 -- 
bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:34.273 10:39:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:34.273 10:39:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:34.273 10:39:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:34.273 10:39:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:34.273 10:39:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:34.531 10:39:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:34.531 "name": "raid_bdev1", 00:25:34.531 "uuid": "d89b213b-e860-4f62-a35d-f4ff78790cca", 00:25:34.531 "strip_size_kb": 64, 00:25:34.531 "state": "configuring", 00:25:34.531 "raid_level": "raid5f", 00:25:34.531 "superblock": true, 00:25:34.531 "num_base_bdevs": 4, 00:25:34.531 "num_base_bdevs_discovered": 2, 00:25:34.531 "num_base_bdevs_operational": 3, 00:25:34.531 "base_bdevs_list": [ 00:25:34.531 { 00:25:34.531 "name": null, 00:25:34.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:34.531 "is_configured": false, 00:25:34.531 "data_offset": 2048, 00:25:34.531 "data_size": 63488 00:25:34.531 }, 00:25:34.531 { 00:25:34.531 "name": "pt2", 00:25:34.531 "uuid": "a62911fc-5fb9-5984-808f-985256a0ac14", 00:25:34.531 "is_configured": true, 00:25:34.531 "data_offset": 2048, 00:25:34.531 "data_size": 63488 00:25:34.531 }, 00:25:34.531 { 00:25:34.531 "name": "pt3", 00:25:34.531 "uuid": "03b77a26-386b-5880-adaa-de5a44bc4b5d", 00:25:34.531 "is_configured": true, 00:25:34.531 "data_offset": 2048, 00:25:34.531 "data_size": 63488 00:25:34.531 }, 00:25:34.531 { 00:25:34.531 "name": null, 00:25:34.531 "uuid": "9dbe81b9-742e-5975-8970-7b8c46bd99e0", 00:25:34.531 "is_configured": false, 00:25:34.531 "data_offset": 2048, 00:25:34.531 "data_size": 63488 00:25:34.531 } 00:25:34.531 ] 00:25:34.531 }' 00:25:34.531 10:39:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:34.531 10:39:28 -- common/autotest_common.sh@10 -- # set +x 00:25:35.097 10:39:28 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:25:35.097 10:39:28 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:25:35.097 10:39:28 -- bdev/bdev_raid.sh@462 -- # i=3 00:25:35.097 10:39:28 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:25:35.355 [2024-07-12 10:39:29.164476] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:25:35.355 [2024-07-12 10:39:29.164540] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:35.355 [2024-07-12 10:39:29.164587] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:25:35.355 [2024-07-12 10:39:29.164606] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:35.355 [2024-07-12 10:39:29.164995] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:35.355 [2024-07-12 10:39:29.165033] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:25:35.355 [2024-07-12 10:39:29.165111] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:25:35.355 [2024-07-12 10:39:29.165133] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:25:35.355 [2024-07-12 10:39:29.165239] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000bd80 00:25:35.355 
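This is the superblock paying off: after the array and its passthru bdevs were deleted, merely re-creating pt2, pt3, and pt4 lets bdev examine find the on-disk metadata, re-claim each member, and bring raid_bdev1 back online on the last arrival, with no bdev_raid_create involved (pt1 had been dropped while the array was live, so only three members remain operational). A sketch of the reassembly, same shorthand:

for i in 2 3 4; do
    $rpc bdev_passthru_create -b "malloc$i" -p "pt$i" \
         -u "00000000-0000-0000-0000-00000000000$i"    # examine re-claims each member
done
# No explicit create: the array surfaces on its own once enough members are back.
$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'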
[2024-07-12 10:39:29.165251] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:25:35.355 [2024-07-12 10:39:29.165354] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:25:35.355 [2024-07-12 10:39:29.170533] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000bd80 00:25:35.355 [2024-07-12 10:39:29.170555] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000bd80 00:25:35.355 [2024-07-12 10:39:29.170775] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:35.355 pt4 00:25:35.355 10:39:29 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:35.355 10:39:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:35.355 10:39:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:35.355 10:39:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:35.355 10:39:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:35.355 10:39:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:35.355 10:39:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:35.355 10:39:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:35.355 10:39:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:35.355 10:39:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:35.356 10:39:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:35.356 10:39:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:35.613 10:39:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:35.613 "name": "raid_bdev1", 00:25:35.613 "uuid": "d89b213b-e860-4f62-a35d-f4ff78790cca", 00:25:35.613 "strip_size_kb": 64, 00:25:35.613 "state": "online", 00:25:35.613 "raid_level": "raid5f", 00:25:35.613 "superblock": true, 00:25:35.613 "num_base_bdevs": 4, 00:25:35.613 "num_base_bdevs_discovered": 3, 00:25:35.613 "num_base_bdevs_operational": 3, 00:25:35.613 "base_bdevs_list": [ 00:25:35.613 { 00:25:35.613 "name": null, 00:25:35.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:35.613 "is_configured": false, 00:25:35.613 "data_offset": 2048, 00:25:35.613 "data_size": 63488 00:25:35.613 }, 00:25:35.613 { 00:25:35.613 "name": "pt2", 00:25:35.613 "uuid": "a62911fc-5fb9-5984-808f-985256a0ac14", 00:25:35.613 "is_configured": true, 00:25:35.613 "data_offset": 2048, 00:25:35.613 "data_size": 63488 00:25:35.613 }, 00:25:35.613 { 00:25:35.613 "name": "pt3", 00:25:35.613 "uuid": "03b77a26-386b-5880-adaa-de5a44bc4b5d", 00:25:35.613 "is_configured": true, 00:25:35.613 "data_offset": 2048, 00:25:35.613 "data_size": 63488 00:25:35.613 }, 00:25:35.613 { 00:25:35.613 "name": "pt4", 00:25:35.613 "uuid": "9dbe81b9-742e-5975-8970-7b8c46bd99e0", 00:25:35.613 "is_configured": true, 00:25:35.613 "data_offset": 2048, 00:25:35.613 "data_size": 63488 00:25:35.613 } 00:25:35.613 ] 00:25:35.613 }' 00:25:35.613 10:39:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:35.613 10:39:29 -- common/autotest_common.sh@10 -- # set +x 00:25:36.179 10:39:30 -- bdev/bdev_raid.sh@468 -- # '[' 4 -gt 2 ']' 00:25:36.179 10:39:30 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:36.437 [2024-07-12 10:39:30.297173] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:36.437 
[2024-07-12 10:39:30.297195] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:36.437 [2024-07-12 10:39:30.297241] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:36.437 [2024-07-12 10:39:30.297298] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:36.437 [2024-07-12 10:39:30.297309] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state offline 00:25:36.437 10:39:30 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:36.437 10:39:30 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:25:36.694 10:39:30 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:25:36.694 10:39:30 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:25:36.694 10:39:30 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:36.951 [2024-07-12 10:39:30.773266] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:36.951 [2024-07-12 10:39:30.773321] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:36.951 [2024-07-12 10:39:30.773354] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:25:36.951 [2024-07-12 10:39:30.773374] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:36.951 [2024-07-12 10:39:30.775519] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:36.951 [2024-07-12 10:39:30.775585] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:36.951 [2024-07-12 10:39:30.775666] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:25:36.951 [2024-07-12 10:39:30.775710] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:36.951 pt1 00:25:36.951 10:39:30 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:25:36.951 10:39:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:36.951 10:39:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:36.951 10:39:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:36.951 10:39:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:36.951 10:39:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:36.951 10:39:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:36.951 10:39:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:36.951 10:39:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:36.951 10:39:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:36.951 10:39:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:36.951 10:39:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:37.209 10:39:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:37.209 "name": "raid_bdev1", 00:25:37.209 "uuid": "d89b213b-e860-4f62-a35d-f4ff78790cca", 00:25:37.209 "strip_size_kb": 64, 00:25:37.209 "state": "configuring", 00:25:37.210 "raid_level": "raid5f", 00:25:37.210 "superblock": true, 00:25:37.210 "num_base_bdevs": 4, 00:25:37.210 "num_base_bdevs_discovered": 1, 00:25:37.210 
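Re-adding pt1 first exercises the stale-metadata path: pt1 was removed while the array was live, so its superblock still describes the original four-member layout, and examine assembles a configuring array that waits for members which no longer exist (the state dump that follows shows 1 discovered against 4 operational). A probe for that state, same shorthand:

$rpc bdev_passthru_create -b malloc1 -p pt1 \
     -u 00000000-0000-0000-0000-000000000001
# pt1's superblock is out of date, so the array sits in "configuring":
$rpc bdev_raid_get_bdevs all | jq -r \
    '.[] | select(.name == "raid_bdev1") | {state, num_base_bdevs_discovered, num_base_bdevs_operational}'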
"num_base_bdevs_operational": 4, 00:25:37.210 "base_bdevs_list": [ 00:25:37.210 { 00:25:37.210 "name": "pt1", 00:25:37.210 "uuid": "c49f7efb-7c3c-583e-80b1-de63fc113bf7", 00:25:37.210 "is_configured": true, 00:25:37.210 "data_offset": 2048, 00:25:37.210 "data_size": 63488 00:25:37.210 }, 00:25:37.210 { 00:25:37.210 "name": null, 00:25:37.210 "uuid": "a62911fc-5fb9-5984-808f-985256a0ac14", 00:25:37.210 "is_configured": false, 00:25:37.210 "data_offset": 2048, 00:25:37.210 "data_size": 63488 00:25:37.210 }, 00:25:37.210 { 00:25:37.210 "name": null, 00:25:37.210 "uuid": "03b77a26-386b-5880-adaa-de5a44bc4b5d", 00:25:37.210 "is_configured": false, 00:25:37.210 "data_offset": 2048, 00:25:37.210 "data_size": 63488 00:25:37.210 }, 00:25:37.210 { 00:25:37.210 "name": null, 00:25:37.210 "uuid": "9dbe81b9-742e-5975-8970-7b8c46bd99e0", 00:25:37.210 "is_configured": false, 00:25:37.210 "data_offset": 2048, 00:25:37.210 "data_size": 63488 00:25:37.210 } 00:25:37.210 ] 00:25:37.210 }' 00:25:37.210 10:39:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:37.210 10:39:31 -- common/autotest_common.sh@10 -- # set +x 00:25:37.776 10:39:31 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:25:37.776 10:39:31 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:25:37.776 10:39:31 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:25:38.034 10:39:31 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:25:38.034 10:39:31 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:25:38.034 10:39:31 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:25:38.291 10:39:32 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:25:38.292 10:39:32 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:25:38.292 10:39:32 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:25:38.550 10:39:32 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:25:38.550 10:39:32 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:25:38.550 10:39:32 -- bdev/bdev_raid.sh@489 -- # i=3 00:25:38.550 10:39:32 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:25:38.809 [2024-07-12 10:39:32.473550] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:25:38.809 [2024-07-12 10:39:32.473609] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:38.809 [2024-07-12 10:39:32.473636] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cf80 00:25:38.809 [2024-07-12 10:39:32.473660] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:38.809 [2024-07-12 10:39:32.473999] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:38.809 [2024-07-12 10:39:32.474053] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:25:38.809 [2024-07-12 10:39:32.474132] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:25:38.809 [2024-07-12 10:39:32.474145] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt4 (4) greater than existing raid bdev raid_bdev1 (2) 00:25:38.809 [2024-07-12 10:39:32.474151] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:38.809 [2024-07-12 
10:39:32.474166] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000cc80 name raid_bdev1, state configuring 00:25:38.809 [2024-07-12 10:39:32.474221] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:25:38.809 pt4 00:25:38.809 10:39:32 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:25:38.809 10:39:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:38.809 10:39:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:38.809 10:39:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:38.809 10:39:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:38.809 10:39:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:38.809 10:39:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:38.809 10:39:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:38.809 10:39:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:38.809 10:39:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:38.809 10:39:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:38.809 10:39:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:38.809 10:39:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:38.809 "name": "raid_bdev1", 00:25:38.809 "uuid": "d89b213b-e860-4f62-a35d-f4ff78790cca", 00:25:38.809 "strip_size_kb": 64, 00:25:38.809 "state": "configuring", 00:25:38.809 "raid_level": "raid5f", 00:25:38.809 "superblock": true, 00:25:38.809 "num_base_bdevs": 4, 00:25:38.809 "num_base_bdevs_discovered": 1, 00:25:38.809 "num_base_bdevs_operational": 3, 00:25:38.809 "base_bdevs_list": [ 00:25:38.809 { 00:25:38.809 "name": null, 00:25:38.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:38.809 "is_configured": false, 00:25:38.809 "data_offset": 2048, 00:25:38.809 "data_size": 63488 00:25:38.809 }, 00:25:38.809 { 00:25:38.809 "name": null, 00:25:38.809 "uuid": "a62911fc-5fb9-5984-808f-985256a0ac14", 00:25:38.809 "is_configured": false, 00:25:38.809 "data_offset": 2048, 00:25:38.809 "data_size": 63488 00:25:38.809 }, 00:25:38.809 { 00:25:38.809 "name": null, 00:25:38.809 "uuid": "03b77a26-386b-5880-adaa-de5a44bc4b5d", 00:25:38.809 "is_configured": false, 00:25:38.809 "data_offset": 2048, 00:25:38.809 "data_size": 63488 00:25:38.809 }, 00:25:38.809 { 00:25:38.809 "name": "pt4", 00:25:38.809 "uuid": "9dbe81b9-742e-5975-8970-7b8c46bd99e0", 00:25:38.809 "is_configured": true, 00:25:38.809 "data_offset": 2048, 00:25:38.809 "data_size": 63488 00:25:38.809 } 00:25:38.809 ] 00:25:38.809 }' 00:25:38.809 10:39:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:38.809 10:39:32 -- common/autotest_common.sh@10 -- # set +x 00:25:39.743 10:39:33 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:25:39.743 10:39:33 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:25:39.743 10:39:33 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:39.743 [2024-07-12 10:39:33.497699] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:39.743 [2024-07-12 10:39:33.497787] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:39.743 [2024-07-12 10:39:33.497819] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000d580 
00:25:39.743 [2024-07-12 10:39:33.497842] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:39.743 [2024-07-12 10:39:33.498195] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:39.743 [2024-07-12 10:39:33.498251] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:39.743 [2024-07-12 10:39:33.498327] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:25:39.743 [2024-07-12 10:39:33.498347] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:39.743 pt2 00:25:39.743 10:39:33 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:25:39.743 10:39:33 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:25:39.743 10:39:33 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:25:40.002 [2024-07-12 10:39:33.681746] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:25:40.002 [2024-07-12 10:39:33.681805] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:40.002 [2024-07-12 10:39:33.681832] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000d880 00:25:40.002 [2024-07-12 10:39:33.681856] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:40.002 [2024-07-12 10:39:33.682202] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:40.002 [2024-07-12 10:39:33.682256] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:25:40.002 [2024-07-12 10:39:33.682333] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:25:40.002 [2024-07-12 10:39:33.682354] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:40.002 [2024-07-12 10:39:33.682457] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000d280 00:25:40.002 [2024-07-12 10:39:33.682469] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:25:40.002 [2024-07-12 10:39:33.682553] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:25:40.002 [2024-07-12 10:39:33.687573] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000d280 00:25:40.002 [2024-07-12 10:39:33.687596] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000d280 00:25:40.002 [2024-07-12 10:39:33.687794] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:40.002 pt3 00:25:40.002 10:39:33 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:25:40.002 10:39:33 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:25:40.002 10:39:33 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:40.002 10:39:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:40.002 10:39:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:40.002 10:39:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:40.002 10:39:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:40.002 10:39:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:40.002 10:39:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:40.002 10:39:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:40.002 10:39:33 -- 
bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:40.002 10:39:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:40.002 10:39:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:40.002 10:39:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:40.002 10:39:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:40.002 "name": "raid_bdev1", 00:25:40.002 "uuid": "d89b213b-e860-4f62-a35d-f4ff78790cca", 00:25:40.002 "strip_size_kb": 64, 00:25:40.002 "state": "online", 00:25:40.002 "raid_level": "raid5f", 00:25:40.002 "superblock": true, 00:25:40.002 "num_base_bdevs": 4, 00:25:40.002 "num_base_bdevs_discovered": 3, 00:25:40.002 "num_base_bdevs_operational": 3, 00:25:40.002 "base_bdevs_list": [ 00:25:40.002 { 00:25:40.002 "name": null, 00:25:40.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:40.002 "is_configured": false, 00:25:40.002 "data_offset": 2048, 00:25:40.002 "data_size": 63488 00:25:40.002 }, 00:25:40.002 { 00:25:40.002 "name": "pt2", 00:25:40.002 "uuid": "a62911fc-5fb9-5984-808f-985256a0ac14", 00:25:40.002 "is_configured": true, 00:25:40.002 "data_offset": 2048, 00:25:40.002 "data_size": 63488 00:25:40.002 }, 00:25:40.003 { 00:25:40.003 "name": "pt3", 00:25:40.003 "uuid": "03b77a26-386b-5880-adaa-de5a44bc4b5d", 00:25:40.003 "is_configured": true, 00:25:40.003 "data_offset": 2048, 00:25:40.003 "data_size": 63488 00:25:40.003 }, 00:25:40.003 { 00:25:40.003 "name": "pt4", 00:25:40.003 "uuid": "9dbe81b9-742e-5975-8970-7b8c46bd99e0", 00:25:40.003 "is_configured": true, 00:25:40.003 "data_offset": 2048, 00:25:40.003 "data_size": 63488 00:25:40.003 } 00:25:40.003 ] 00:25:40.003 }' 00:25:40.003 10:39:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:40.003 10:39:33 -- common/autotest_common.sh@10 -- # set +x 00:25:40.939 10:39:34 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:40.939 10:39:34 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:25:40.939 [2024-07-12 10:39:34.721261] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:40.939 10:39:34 -- bdev/bdev_raid.sh@506 -- # '[' d89b213b-e860-4f62-a35d-f4ff78790cca '!=' d89b213b-e860-4f62-a35d-f4ff78790cca ']' 00:25:40.939 10:39:34 -- bdev/bdev_raid.sh@511 -- # killprocess 134239 00:25:40.939 10:39:34 -- common/autotest_common.sh@926 -- # '[' -z 134239 ']' 00:25:40.939 10:39:34 -- common/autotest_common.sh@930 -- # kill -0 134239 00:25:40.939 10:39:34 -- common/autotest_common.sh@931 -- # uname 00:25:40.939 10:39:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:40.939 10:39:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 134239 00:25:40.939 killing process with pid 134239 00:25:40.939 10:39:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:40.939 10:39:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:40.939 10:39:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 134239' 00:25:40.939 10:39:34 -- common/autotest_common.sh@945 -- # kill 134239 00:25:40.939 10:39:34 -- common/autotest_common.sh@950 -- # wait 134239 00:25:40.939 [2024-07-12 10:39:34.749395] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:40.939 [2024-07-12 10:39:34.749450] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:40.939 [2024-07-12 10:39:34.749508] bdev_raid.c: 
426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:40.939 [2024-07-12 10:39:34.749518] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000d280 name raid_bdev1, state offline 00:25:41.198 [2024-07-12 10:39:35.016225] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:42.131 ************************************ 00:25:42.131 END TEST raid5f_superblock_test 00:25:42.131 ************************************ 00:25:42.131 10:39:36 -- bdev/bdev_raid.sh@513 -- # return 0 00:25:42.131 00:25:42.131 real 0m21.362s 00:25:42.131 user 0m39.690s 00:25:42.131 sys 0m2.221s 00:25:42.131 10:39:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:42.131 10:39:36 -- common/autotest_common.sh@10 -- # set +x 00:25:42.390 10:39:36 -- bdev/bdev_raid.sh@747 -- # '[' true = true ']' 00:25:42.390 10:39:36 -- bdev/bdev_raid.sh@748 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false 00:25:42.390 10:39:36 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:25:42.390 10:39:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:42.390 10:39:36 -- common/autotest_common.sh@10 -- # set +x 00:25:42.390 ************************************ 00:25:42.390 START TEST raid5f_rebuild_test 00:25:42.390 ************************************ 00:25:42.390 10:39:36 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid5f 4 false false 00:25:42.390 10:39:36 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:25:42.390 10:39:36 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:25:42.390 10:39:36 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:25:42.390 10:39:36 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:25:42.390 10:39:36 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:25:42.390 10:39:36 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:25:42.390 10:39:36 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:42.390 10:39:36 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:25:42.390 10:39:36 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:42.390 10:39:36 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:42.390 10:39:36 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:25:42.390 10:39:36 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:42.390 10:39:36 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:42.390 10:39:36 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:25:42.390 10:39:36 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:42.390 10:39:36 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:42.390 10:39:36 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:25:42.390 10:39:36 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:42.390 10:39:36 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:42.390 10:39:36 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:25:42.390 10:39:36 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:25:42.390 10:39:36 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:25:42.390 10:39:36 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:25:42.390 10:39:36 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:25:42.390 10:39:36 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:25:42.390 10:39:36 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:25:42.390 10:39:36 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:25:42.390 10:39:36 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:25:42.390 10:39:36 -- 
bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:25:42.390 10:39:36 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:25:42.390 10:39:36 -- bdev/bdev_raid.sh@544 -- # raid_pid=134940 00:25:42.390 10:39:36 -- bdev/bdev_raid.sh@545 -- # waitforlisten 134940 /var/tmp/spdk-raid.sock 00:25:42.390 10:39:36 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:25:42.390 10:39:36 -- common/autotest_common.sh@819 -- # '[' -z 134940 ']' 00:25:42.390 10:39:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:42.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:42.391 10:39:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:42.391 10:39:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:42.391 10:39:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:42.391 10:39:36 -- common/autotest_common.sh@10 -- # set +x 00:25:42.391 [2024-07-12 10:39:36.164881] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:25:42.391 [2024-07-12 10:39:36.165068] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134940 ] 00:25:42.391 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:42.391 Zero copy mechanism will not be used. 00:25:42.649 [2024-07-12 10:39:36.337746] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:42.908 [2024-07-12 10:39:36.592322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:42.908 [2024-07-12 10:39:36.776992] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:43.166 10:39:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:43.166 10:39:37 -- common/autotest_common.sh@852 -- # return 0 00:25:43.166 10:39:37 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:43.166 10:39:37 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:25:43.166 10:39:37 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:43.423 BaseBdev1 00:25:43.423 10:39:37 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:43.423 10:39:37 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:25:43.423 10:39:37 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:43.681 BaseBdev2 00:25:43.681 10:39:37 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:43.681 10:39:37 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:25:43.681 10:39:37 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:25:43.939 BaseBdev3 00:25:43.939 10:39:37 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:43.939 10:39:37 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:25:43.939 10:39:37 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:25:44.197 BaseBdev4 00:25:44.197 10:39:38 -- 
bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:25:44.454 spare_malloc 00:25:44.454 10:39:38 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:25:44.713 spare_delay 00:25:44.713 10:39:38 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:44.713 [2024-07-12 10:39:38.575781] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:44.713 [2024-07-12 10:39:38.575861] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:44.713 [2024-07-12 10:39:38.575893] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:25:44.713 [2024-07-12 10:39:38.575932] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:44.713 [2024-07-12 10:39:38.577746] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:44.713 [2024-07-12 10:39:38.577790] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:44.713 spare 00:25:44.713 10:39:38 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:25:44.971 [2024-07-12 10:39:38.795868] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:44.972 [2024-07-12 10:39:38.797422] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:44.972 [2024-07-12 10:39:38.797474] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:44.972 [2024-07-12 10:39:38.797510] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:44.972 [2024-07-12 10:39:38.797579] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:25:44.972 [2024-07-12 10:39:38.797590] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:25:44.972 [2024-07-12 10:39:38.797719] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:25:44.972 [2024-07-12 10:39:38.802858] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:25:44.972 [2024-07-12 10:39:38.802879] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:25:44.972 [2024-07-12 10:39:38.803061] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:44.972 10:39:38 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:25:44.972 10:39:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:44.972 10:39:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:44.972 10:39:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:44.972 10:39:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:44.972 10:39:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:44.972 10:39:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:44.972 10:39:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:44.972 10:39:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:44.972 
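
At this point in the rebuild test the fixture is fully assembled: four malloc base bdevs (32 MB at 512-byte blocks is exactly the 65536-block data_size reported in the dumps) plus the delayed spare stack registered just above, all tied together by bdev_raid_create. Condensed from the trace, with the loop as shorthand for the four @553 calls (socket path and names as used by this run):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for i in 1 2 3 4; do
        $rpc bdev_malloc_create 32 512 -b BaseBdev$i     # 32 MB, 512 B blocks -> 65536 data blocks each
    done
    $rpc bdev_malloc_create 32 512 -b spare_malloc       # backing store for the spare
    $rpc bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000   # 100000 us write latency keeps the rebuild slow enough to observe
    $rpc bdev_passthru_create -b spare_delay -p spare    # claimable top-level bdev named "spare"
    $rpc bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1   # 64 KiB strip, matching strip_size_kb in the dumps
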
10:39:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:44.972 10:39:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:44.972 10:39:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:45.230 10:39:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:45.230 "name": "raid_bdev1", 00:25:45.230 "uuid": "6a4aeba9-9a9f-441f-9457-7fa42ad5a417", 00:25:45.230 "strip_size_kb": 64, 00:25:45.230 "state": "online", 00:25:45.230 "raid_level": "raid5f", 00:25:45.230 "superblock": false, 00:25:45.230 "num_base_bdevs": 4, 00:25:45.230 "num_base_bdevs_discovered": 4, 00:25:45.230 "num_base_bdevs_operational": 4, 00:25:45.230 "base_bdevs_list": [ 00:25:45.230 { 00:25:45.230 "name": "BaseBdev1", 00:25:45.230 "uuid": "834655ce-456c-4d48-ad43-618939900937", 00:25:45.230 "is_configured": true, 00:25:45.230 "data_offset": 0, 00:25:45.230 "data_size": 65536 00:25:45.230 }, 00:25:45.230 { 00:25:45.230 "name": "BaseBdev2", 00:25:45.230 "uuid": "c86fd84b-8fb9-4fe7-a6f0-c59e85002531", 00:25:45.230 "is_configured": true, 00:25:45.230 "data_offset": 0, 00:25:45.230 "data_size": 65536 00:25:45.230 }, 00:25:45.230 { 00:25:45.230 "name": "BaseBdev3", 00:25:45.230 "uuid": "57ec11a2-ef5c-47ad-9be6-d9384407ba8c", 00:25:45.230 "is_configured": true, 00:25:45.230 "data_offset": 0, 00:25:45.230 "data_size": 65536 00:25:45.230 }, 00:25:45.230 { 00:25:45.230 "name": "BaseBdev4", 00:25:45.230 "uuid": "8e7991c5-5eb4-4102-9e51-57207c3eef2d", 00:25:45.230 "is_configured": true, 00:25:45.230 "data_offset": 0, 00:25:45.230 "data_size": 65536 00:25:45.230 } 00:25:45.230 ] 00:25:45.230 }' 00:25:45.230 10:39:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:45.230 10:39:38 -- common/autotest_common.sh@10 -- # set +x 00:25:45.796 10:39:39 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:25:45.796 10:39:39 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:46.054 [2024-07-12 10:39:39.841914] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:46.054 10:39:39 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=196608 00:25:46.054 10:39:39 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:46.054 10:39:39 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:25:46.312 10:39:40 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:25:46.312 10:39:40 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:25:46.312 10:39:40 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:25:46.312 10:39:40 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:25:46.312 10:39:40 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:46.312 10:39:40 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:25:46.312 10:39:40 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:46.312 10:39:40 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:25:46.312 10:39:40 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:46.312 10:39:40 -- bdev/nbd_common.sh@12 -- # local i 00:25:46.312 10:39:40 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:46.312 10:39:40 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:46.312 10:39:40 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:25:46.312 [2024-07-12 
10:39:40.205841] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:25:46.570 /dev/nbd0 00:25:46.570 10:39:40 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:46.570 10:39:40 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:46.570 10:39:40 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:25:46.570 10:39:40 -- common/autotest_common.sh@857 -- # local i 00:25:46.570 10:39:40 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:25:46.570 10:39:40 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:25:46.570 10:39:40 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:25:46.570 10:39:40 -- common/autotest_common.sh@861 -- # break 00:25:46.570 10:39:40 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:25:46.570 10:39:40 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:25:46.570 10:39:40 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:46.570 1+0 records in 00:25:46.570 1+0 records out 00:25:46.570 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000259637 s, 15.8 MB/s 00:25:46.570 10:39:40 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:46.570 10:39:40 -- common/autotest_common.sh@874 -- # size=4096 00:25:46.570 10:39:40 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:46.570 10:39:40 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:25:46.570 10:39:40 -- common/autotest_common.sh@877 -- # return 0 00:25:46.570 10:39:40 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:46.570 10:39:40 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:46.570 10:39:40 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:25:46.570 10:39:40 -- bdev/bdev_raid.sh@581 -- # write_unit_size=384 00:25:46.570 10:39:40 -- bdev/bdev_raid.sh@582 -- # echo 192 00:25:46.570 10:39:40 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:25:47.137 512+0 records in 00:25:47.137 512+0 records out 00:25:47.137 100663296 bytes (101 MB, 96 MiB) copied, 0.517058 s, 195 MB/s 00:25:47.137 10:39:40 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:25:47.137 10:39:40 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:47.137 10:39:40 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:25:47.137 10:39:40 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:47.137 10:39:40 -- bdev/nbd_common.sh@51 -- # local i 00:25:47.137 10:39:40 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:47.137 10:39:40 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:47.137 10:39:41 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:47.137 10:39:41 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:47.137 10:39:41 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:47.137 10:39:41 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:47.137 10:39:41 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:47.137 10:39:41 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:47.137 10:39:41 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:25:47.137 [2024-07-12 10:39:41.043326] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:47.395 10:39:41 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:25:47.395 10:39:41 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:47.396 10:39:41 -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:47.396 10:39:41 -- bdev/nbd_common.sh@41 -- # break 00:25:47.396 10:39:41 -- bdev/nbd_common.sh@45 -- # return 0 00:25:47.396 10:39:41 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:25:47.654 [2024-07-12 10:39:41.375209] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:47.654 10:39:41 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:47.654 10:39:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:47.654 10:39:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:47.654 10:39:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:47.654 10:39:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:47.654 10:39:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:47.654 10:39:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:47.655 10:39:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:47.655 10:39:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:47.655 10:39:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:47.655 10:39:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:47.655 10:39:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:47.913 10:39:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:47.913 "name": "raid_bdev1", 00:25:47.913 "uuid": "6a4aeba9-9a9f-441f-9457-7fa42ad5a417", 00:25:47.913 "strip_size_kb": 64, 00:25:47.913 "state": "online", 00:25:47.913 "raid_level": "raid5f", 00:25:47.913 "superblock": false, 00:25:47.913 "num_base_bdevs": 4, 00:25:47.913 "num_base_bdevs_discovered": 3, 00:25:47.913 "num_base_bdevs_operational": 3, 00:25:47.913 "base_bdevs_list": [ 00:25:47.913 { 00:25:47.913 "name": null, 00:25:47.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:47.913 "is_configured": false, 00:25:47.913 "data_offset": 0, 00:25:47.913 "data_size": 65536 00:25:47.913 }, 00:25:47.913 { 00:25:47.913 "name": "BaseBdev2", 00:25:47.913 "uuid": "c86fd84b-8fb9-4fe7-a6f0-c59e85002531", 00:25:47.913 "is_configured": true, 00:25:47.913 "data_offset": 0, 00:25:47.913 "data_size": 65536 00:25:47.913 }, 00:25:47.913 { 00:25:47.913 "name": "BaseBdev3", 00:25:47.913 "uuid": "57ec11a2-ef5c-47ad-9be6-d9384407ba8c", 00:25:47.913 "is_configured": true, 00:25:47.913 "data_offset": 0, 00:25:47.913 "data_size": 65536 00:25:47.913 }, 00:25:47.913 { 00:25:47.913 "name": "BaseBdev4", 00:25:47.913 "uuid": "8e7991c5-5eb4-4102-9e51-57207c3eef2d", 00:25:47.913 "is_configured": true, 00:25:47.913 "data_offset": 0, 00:25:47.913 "data_size": 65536 00:25:47.913 } 00:25:47.913 ] 00:25:47.913 }' 00:25:47.913 10:39:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:47.913 10:39:41 -- common/autotest_common.sh@10 -- # set +x 00:25:48.479 10:39:42 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:48.737 [2024-07-12 10:39:42.471389] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:25:48.737 [2024-07-12 10:39:42.471435] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:48.737 [2024-07-12 10:39:42.481492] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002d220 00:25:48.737 
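
The remove/attach pair traced above is what triggers the rebuild reported next: BaseBdev1 is pulled, leaving the array online but degraded (3 of 4 base bdevs operational, with a null slot in the dump), and the delayed "spare" passthru built earlier is attached in its place. Condensed, with names and socket path as used by this run:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_raid_remove_base_bdev BaseBdev1       # degrade: num_base_bdevs_operational drops 4 -> 3
    $rpc bdev_raid_add_base_bdev raid_bdev1 spare   # attach the spare; the raid module starts the rebuild
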
[2024-07-12 10:39:42.487915] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:48.737 10:39:42 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:25:49.668 10:39:43 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:49.668 10:39:43 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:49.668 10:39:43 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:49.668 10:39:43 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:49.668 10:39:43 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:49.668 10:39:43 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:49.668 10:39:43 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:49.925 10:39:43 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:49.925 "name": "raid_bdev1", 00:25:49.925 "uuid": "6a4aeba9-9a9f-441f-9457-7fa42ad5a417", 00:25:49.925 "strip_size_kb": 64, 00:25:49.925 "state": "online", 00:25:49.925 "raid_level": "raid5f", 00:25:49.925 "superblock": false, 00:25:49.925 "num_base_bdevs": 4, 00:25:49.925 "num_base_bdevs_discovered": 4, 00:25:49.925 "num_base_bdevs_operational": 4, 00:25:49.925 "process": { 00:25:49.925 "type": "rebuild", 00:25:49.925 "target": "spare", 00:25:49.925 "progress": { 00:25:49.925 "blocks": 23040, 00:25:49.925 "percent": 11 00:25:49.925 } 00:25:49.925 }, 00:25:49.925 "base_bdevs_list": [ 00:25:49.925 { 00:25:49.925 "name": "spare", 00:25:49.925 "uuid": "d70a08cb-1be2-53a3-94ba-8d4ff24bbe11", 00:25:49.925 "is_configured": true, 00:25:49.925 "data_offset": 0, 00:25:49.925 "data_size": 65536 00:25:49.925 }, 00:25:49.925 { 00:25:49.925 "name": "BaseBdev2", 00:25:49.925 "uuid": "c86fd84b-8fb9-4fe7-a6f0-c59e85002531", 00:25:49.925 "is_configured": true, 00:25:49.925 "data_offset": 0, 00:25:49.925 "data_size": 65536 00:25:49.925 }, 00:25:49.925 { 00:25:49.925 "name": "BaseBdev3", 00:25:49.925 "uuid": "57ec11a2-ef5c-47ad-9be6-d9384407ba8c", 00:25:49.925 "is_configured": true, 00:25:49.925 "data_offset": 0, 00:25:49.925 "data_size": 65536 00:25:49.925 }, 00:25:49.925 { 00:25:49.925 "name": "BaseBdev4", 00:25:49.925 "uuid": "8e7991c5-5eb4-4102-9e51-57207c3eef2d", 00:25:49.925 "is_configured": true, 00:25:49.925 "data_offset": 0, 00:25:49.925 "data_size": 65536 00:25:49.925 } 00:25:49.925 ] 00:25:49.925 }' 00:25:49.925 10:39:43 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:49.925 10:39:43 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:49.925 10:39:43 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:49.925 10:39:43 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:49.925 10:39:43 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:25:50.182 [2024-07-12 10:39:44.041336] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:50.439 [2024-07-12 10:39:44.098747] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:50.439 [2024-07-12 10:39:44.098849] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:50.439 10:39:44 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:50.439 10:39:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:50.439 10:39:44 -- bdev/bdev_raid.sh@118 -- # local 
expected_state=online 00:25:50.439 10:39:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:50.439 10:39:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:50.439 10:39:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:50.439 10:39:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:50.439 10:39:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:50.439 10:39:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:50.439 10:39:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:50.439 10:39:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:50.439 10:39:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:50.439 10:39:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:50.439 "name": "raid_bdev1", 00:25:50.439 "uuid": "6a4aeba9-9a9f-441f-9457-7fa42ad5a417", 00:25:50.439 "strip_size_kb": 64, 00:25:50.439 "state": "online", 00:25:50.439 "raid_level": "raid5f", 00:25:50.439 "superblock": false, 00:25:50.439 "num_base_bdevs": 4, 00:25:50.439 "num_base_bdevs_discovered": 3, 00:25:50.439 "num_base_bdevs_operational": 3, 00:25:50.439 "base_bdevs_list": [ 00:25:50.439 { 00:25:50.439 "name": null, 00:25:50.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:50.439 "is_configured": false, 00:25:50.439 "data_offset": 0, 00:25:50.439 "data_size": 65536 00:25:50.439 }, 00:25:50.439 { 00:25:50.439 "name": "BaseBdev2", 00:25:50.439 "uuid": "c86fd84b-8fb9-4fe7-a6f0-c59e85002531", 00:25:50.439 "is_configured": true, 00:25:50.439 "data_offset": 0, 00:25:50.439 "data_size": 65536 00:25:50.439 }, 00:25:50.439 { 00:25:50.439 "name": "BaseBdev3", 00:25:50.439 "uuid": "57ec11a2-ef5c-47ad-9be6-d9384407ba8c", 00:25:50.439 "is_configured": true, 00:25:50.439 "data_offset": 0, 00:25:50.439 "data_size": 65536 00:25:50.439 }, 00:25:50.439 { 00:25:50.439 "name": "BaseBdev4", 00:25:50.439 "uuid": "8e7991c5-5eb4-4102-9e51-57207c3eef2d", 00:25:50.439 "is_configured": true, 00:25:50.439 "data_offset": 0, 00:25:50.439 "data_size": 65536 00:25:50.439 } 00:25:50.439 ] 00:25:50.439 }' 00:25:50.439 10:39:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:50.439 10:39:44 -- common/autotest_common.sh@10 -- # set +x 00:25:51.372 10:39:45 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:51.372 10:39:45 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:51.372 10:39:45 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:25:51.372 10:39:45 -- bdev/bdev_raid.sh@185 -- # local target=none 00:25:51.372 10:39:45 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:51.372 10:39:45 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:51.372 10:39:45 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:51.372 10:39:45 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:51.372 "name": "raid_bdev1", 00:25:51.372 "uuid": "6a4aeba9-9a9f-441f-9457-7fa42ad5a417", 00:25:51.372 "strip_size_kb": 64, 00:25:51.372 "state": "online", 00:25:51.372 "raid_level": "raid5f", 00:25:51.372 "superblock": false, 00:25:51.372 "num_base_bdevs": 4, 00:25:51.372 "num_base_bdevs_discovered": 3, 00:25:51.372 "num_base_bdevs_operational": 3, 00:25:51.372 "base_bdevs_list": [ 00:25:51.372 { 00:25:51.372 "name": null, 00:25:51.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:51.372 "is_configured": 
false, 00:25:51.372 "data_offset": 0, 00:25:51.372 "data_size": 65536 00:25:51.372 }, 00:25:51.372 { 00:25:51.372 "name": "BaseBdev2", 00:25:51.372 "uuid": "c86fd84b-8fb9-4fe7-a6f0-c59e85002531", 00:25:51.372 "is_configured": true, 00:25:51.372 "data_offset": 0, 00:25:51.372 "data_size": 65536 00:25:51.372 }, 00:25:51.372 { 00:25:51.372 "name": "BaseBdev3", 00:25:51.372 "uuid": "57ec11a2-ef5c-47ad-9be6-d9384407ba8c", 00:25:51.372 "is_configured": true, 00:25:51.372 "data_offset": 0, 00:25:51.372 "data_size": 65536 00:25:51.372 }, 00:25:51.372 { 00:25:51.372 "name": "BaseBdev4", 00:25:51.372 "uuid": "8e7991c5-5eb4-4102-9e51-57207c3eef2d", 00:25:51.372 "is_configured": true, 00:25:51.372 "data_offset": 0, 00:25:51.372 "data_size": 65536 00:25:51.372 } 00:25:51.372 ] 00:25:51.372 }' 00:25:51.372 10:39:45 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:51.629 10:39:45 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:51.629 10:39:45 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:51.629 10:39:45 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:25:51.629 10:39:45 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:51.886 [2024-07-12 10:39:45.551775] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:25:51.886 [2024-07-12 10:39:45.551812] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:51.886 [2024-07-12 10:39:45.561297] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002d3c0 00:25:51.886 [2024-07-12 10:39:45.568446] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:51.886 10:39:45 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:25:52.819 10:39:46 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:52.819 10:39:46 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:52.819 10:39:46 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:52.819 10:39:46 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:52.819 10:39:46 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:52.819 10:39:46 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:52.819 10:39:46 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:53.077 10:39:46 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:53.077 "name": "raid_bdev1", 00:25:53.077 "uuid": "6a4aeba9-9a9f-441f-9457-7fa42ad5a417", 00:25:53.077 "strip_size_kb": 64, 00:25:53.077 "state": "online", 00:25:53.077 "raid_level": "raid5f", 00:25:53.077 "superblock": false, 00:25:53.077 "num_base_bdevs": 4, 00:25:53.077 "num_base_bdevs_discovered": 4, 00:25:53.077 "num_base_bdevs_operational": 4, 00:25:53.077 "process": { 00:25:53.077 "type": "rebuild", 00:25:53.077 "target": "spare", 00:25:53.077 "progress": { 00:25:53.077 "blocks": 23040, 00:25:53.077 "percent": 11 00:25:53.077 } 00:25:53.077 }, 00:25:53.077 "base_bdevs_list": [ 00:25:53.077 { 00:25:53.077 "name": "spare", 00:25:53.077 "uuid": "d70a08cb-1be2-53a3-94ba-8d4ff24bbe11", 00:25:53.077 "is_configured": true, 00:25:53.077 "data_offset": 0, 00:25:53.077 "data_size": 65536 00:25:53.077 }, 00:25:53.077 { 00:25:53.077 "name": "BaseBdev2", 00:25:53.077 "uuid": "c86fd84b-8fb9-4fe7-a6f0-c59e85002531", 00:25:53.077 "is_configured": true, 00:25:53.077 
"data_offset": 0, 00:25:53.077 "data_size": 65536 00:25:53.077 }, 00:25:53.077 { 00:25:53.077 "name": "BaseBdev3", 00:25:53.077 "uuid": "57ec11a2-ef5c-47ad-9be6-d9384407ba8c", 00:25:53.077 "is_configured": true, 00:25:53.077 "data_offset": 0, 00:25:53.077 "data_size": 65536 00:25:53.077 }, 00:25:53.077 { 00:25:53.077 "name": "BaseBdev4", 00:25:53.077 "uuid": "8e7991c5-5eb4-4102-9e51-57207c3eef2d", 00:25:53.077 "is_configured": true, 00:25:53.077 "data_offset": 0, 00:25:53.077 "data_size": 65536 00:25:53.078 } 00:25:53.078 ] 00:25:53.078 }' 00:25:53.078 10:39:46 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:53.078 10:39:46 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:53.078 10:39:46 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:53.078 10:39:46 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:53.078 10:39:46 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:25:53.078 10:39:46 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:25:53.078 10:39:46 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:25:53.078 10:39:46 -- bdev/bdev_raid.sh@657 -- # local timeout=699 00:25:53.078 10:39:46 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:53.078 10:39:46 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:53.078 10:39:46 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:53.078 10:39:46 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:53.078 10:39:46 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:53.078 10:39:46 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:53.078 10:39:46 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:53.078 10:39:46 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:53.335 10:39:47 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:53.336 "name": "raid_bdev1", 00:25:53.336 "uuid": "6a4aeba9-9a9f-441f-9457-7fa42ad5a417", 00:25:53.336 "strip_size_kb": 64, 00:25:53.336 "state": "online", 00:25:53.336 "raid_level": "raid5f", 00:25:53.336 "superblock": false, 00:25:53.336 "num_base_bdevs": 4, 00:25:53.336 "num_base_bdevs_discovered": 4, 00:25:53.336 "num_base_bdevs_operational": 4, 00:25:53.336 "process": { 00:25:53.336 "type": "rebuild", 00:25:53.336 "target": "spare", 00:25:53.336 "progress": { 00:25:53.336 "blocks": 28800, 00:25:53.336 "percent": 14 00:25:53.336 } 00:25:53.336 }, 00:25:53.336 "base_bdevs_list": [ 00:25:53.336 { 00:25:53.336 "name": "spare", 00:25:53.336 "uuid": "d70a08cb-1be2-53a3-94ba-8d4ff24bbe11", 00:25:53.336 "is_configured": true, 00:25:53.336 "data_offset": 0, 00:25:53.336 "data_size": 65536 00:25:53.336 }, 00:25:53.336 { 00:25:53.336 "name": "BaseBdev2", 00:25:53.336 "uuid": "c86fd84b-8fb9-4fe7-a6f0-c59e85002531", 00:25:53.336 "is_configured": true, 00:25:53.336 "data_offset": 0, 00:25:53.336 "data_size": 65536 00:25:53.336 }, 00:25:53.336 { 00:25:53.336 "name": "BaseBdev3", 00:25:53.336 "uuid": "57ec11a2-ef5c-47ad-9be6-d9384407ba8c", 00:25:53.336 "is_configured": true, 00:25:53.336 "data_offset": 0, 00:25:53.336 "data_size": 65536 00:25:53.336 }, 00:25:53.336 { 00:25:53.336 "name": "BaseBdev4", 00:25:53.336 "uuid": "8e7991c5-5eb4-4102-9e51-57207c3eef2d", 00:25:53.336 "is_configured": true, 00:25:53.336 "data_offset": 0, 00:25:53.336 "data_size": 65536 00:25:53.336 } 00:25:53.336 ] 00:25:53.336 }' 00:25:53.336 10:39:47 -- bdev/bdev_raid.sh@190 -- 
# jq -r '.process.type // "none"' 00:25:53.336 10:39:47 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:53.336 10:39:47 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:53.594 10:39:47 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:53.594 10:39:47 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:54.529 10:39:48 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:54.529 10:39:48 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:54.529 10:39:48 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:54.529 10:39:48 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:54.529 10:39:48 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:54.529 10:39:48 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:54.529 10:39:48 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:54.529 10:39:48 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:54.787 10:39:48 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:54.787 "name": "raid_bdev1", 00:25:54.787 "uuid": "6a4aeba9-9a9f-441f-9457-7fa42ad5a417", 00:25:54.787 "strip_size_kb": 64, 00:25:54.787 "state": "online", 00:25:54.787 "raid_level": "raid5f", 00:25:54.787 "superblock": false, 00:25:54.787 "num_base_bdevs": 4, 00:25:54.787 "num_base_bdevs_discovered": 4, 00:25:54.787 "num_base_bdevs_operational": 4, 00:25:54.787 "process": { 00:25:54.787 "type": "rebuild", 00:25:54.787 "target": "spare", 00:25:54.787 "progress": { 00:25:54.787 "blocks": 55680, 00:25:54.787 "percent": 28 00:25:54.787 } 00:25:54.787 }, 00:25:54.787 "base_bdevs_list": [ 00:25:54.787 { 00:25:54.787 "name": "spare", 00:25:54.787 "uuid": "d70a08cb-1be2-53a3-94ba-8d4ff24bbe11", 00:25:54.787 "is_configured": true, 00:25:54.787 "data_offset": 0, 00:25:54.787 "data_size": 65536 00:25:54.787 }, 00:25:54.787 { 00:25:54.787 "name": "BaseBdev2", 00:25:54.787 "uuid": "c86fd84b-8fb9-4fe7-a6f0-c59e85002531", 00:25:54.787 "is_configured": true, 00:25:54.787 "data_offset": 0, 00:25:54.787 "data_size": 65536 00:25:54.787 }, 00:25:54.787 { 00:25:54.787 "name": "BaseBdev3", 00:25:54.787 "uuid": "57ec11a2-ef5c-47ad-9be6-d9384407ba8c", 00:25:54.787 "is_configured": true, 00:25:54.787 "data_offset": 0, 00:25:54.787 "data_size": 65536 00:25:54.787 }, 00:25:54.787 { 00:25:54.787 "name": "BaseBdev4", 00:25:54.787 "uuid": "8e7991c5-5eb4-4102-9e51-57207c3eef2d", 00:25:54.787 "is_configured": true, 00:25:54.787 "data_offset": 0, 00:25:54.787 "data_size": 65536 00:25:54.787 } 00:25:54.787 ] 00:25:54.787 }' 00:25:54.787 10:39:48 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:54.787 10:39:48 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:54.787 10:39:48 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:54.787 10:39:48 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:54.787 10:39:48 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:56.160 10:39:49 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:56.160 10:39:49 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:56.160 10:39:49 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:56.160 10:39:49 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:56.160 10:39:49 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:56.160 10:39:49 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:56.160 
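
Each block above is one pass of the monitor loop: re-fetch the bdev, check that process.type is still "rebuild" and process.target is "spare", sleep one second, and continue while (( SECONDS < timeout )) (the expanded timeout value, 699, is visible at @657 in the trace). A condensed sketch of that loop; the exit-on-completion test here is an assumption, since the script's own loop body is larger than what the trace shows:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    timeout=699   # expanded value from bdev_raid.sh@657 above
    while (( SECONDS < timeout )); do
        info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
        [[ $(jq -r '.process.type // "none"' <<< "$info") == rebuild ]] || break   # assumption: stop once the process entry disappears
        echo "rebuild at $(jq -r '.process.progress.percent' <<< "$info")%"
        sleep 1
    done
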
10:39:49 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:56.160 10:39:49 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:56.160 10:39:49 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:56.160 "name": "raid_bdev1", 00:25:56.160 "uuid": "6a4aeba9-9a9f-441f-9457-7fa42ad5a417", 00:25:56.160 "strip_size_kb": 64, 00:25:56.160 "state": "online", 00:25:56.160 "raid_level": "raid5f", 00:25:56.160 "superblock": false, 00:25:56.160 "num_base_bdevs": 4, 00:25:56.160 "num_base_bdevs_discovered": 4, 00:25:56.160 "num_base_bdevs_operational": 4, 00:25:56.160 "process": { 00:25:56.160 "type": "rebuild", 00:25:56.160 "target": "spare", 00:25:56.160 "progress": { 00:25:56.160 "blocks": 80640, 00:25:56.160 "percent": 41 00:25:56.160 } 00:25:56.160 }, 00:25:56.160 "base_bdevs_list": [ 00:25:56.160 { 00:25:56.160 "name": "spare", 00:25:56.160 "uuid": "d70a08cb-1be2-53a3-94ba-8d4ff24bbe11", 00:25:56.160 "is_configured": true, 00:25:56.160 "data_offset": 0, 00:25:56.160 "data_size": 65536 00:25:56.160 }, 00:25:56.161 { 00:25:56.161 "name": "BaseBdev2", 00:25:56.161 "uuid": "c86fd84b-8fb9-4fe7-a6f0-c59e85002531", 00:25:56.161 "is_configured": true, 00:25:56.161 "data_offset": 0, 00:25:56.161 "data_size": 65536 00:25:56.161 }, 00:25:56.161 { 00:25:56.161 "name": "BaseBdev3", 00:25:56.161 "uuid": "57ec11a2-ef5c-47ad-9be6-d9384407ba8c", 00:25:56.161 "is_configured": true, 00:25:56.161 "data_offset": 0, 00:25:56.161 "data_size": 65536 00:25:56.161 }, 00:25:56.161 { 00:25:56.161 "name": "BaseBdev4", 00:25:56.161 "uuid": "8e7991c5-5eb4-4102-9e51-57207c3eef2d", 00:25:56.161 "is_configured": true, 00:25:56.161 "data_offset": 0, 00:25:56.161 "data_size": 65536 00:25:56.161 } 00:25:56.161 ] 00:25:56.161 }' 00:25:56.161 10:39:49 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:56.161 10:39:49 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:56.161 10:39:49 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:56.161 10:39:49 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:56.161 10:39:49 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:57.094 10:39:50 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:57.094 10:39:50 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:57.094 10:39:50 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:57.094 10:39:50 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:57.094 10:39:50 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:57.094 10:39:50 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:57.094 10:39:50 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:57.094 10:39:50 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:57.352 10:39:51 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:57.352 "name": "raid_bdev1", 00:25:57.352 "uuid": "6a4aeba9-9a9f-441f-9457-7fa42ad5a417", 00:25:57.352 "strip_size_kb": 64, 00:25:57.352 "state": "online", 00:25:57.352 "raid_level": "raid5f", 00:25:57.352 "superblock": false, 00:25:57.352 "num_base_bdevs": 4, 00:25:57.352 "num_base_bdevs_discovered": 4, 00:25:57.352 "num_base_bdevs_operational": 4, 00:25:57.352 "process": { 00:25:57.352 "type": "rebuild", 00:25:57.352 "target": "spare", 00:25:57.352 "progress": { 00:25:57.352 "blocks": 107520, 00:25:57.352 "percent": 54 
00:25:57.352 } 00:25:57.352 }, 00:25:57.352 "base_bdevs_list": [ 00:25:57.352 { 00:25:57.352 "name": "spare", 00:25:57.352 "uuid": "d70a08cb-1be2-53a3-94ba-8d4ff24bbe11", 00:25:57.352 "is_configured": true, 00:25:57.352 "data_offset": 0, 00:25:57.352 "data_size": 65536 00:25:57.352 }, 00:25:57.352 { 00:25:57.352 "name": "BaseBdev2", 00:25:57.352 "uuid": "c86fd84b-8fb9-4fe7-a6f0-c59e85002531", 00:25:57.352 "is_configured": true, 00:25:57.352 "data_offset": 0, 00:25:57.352 "data_size": 65536 00:25:57.352 }, 00:25:57.352 { 00:25:57.352 "name": "BaseBdev3", 00:25:57.352 "uuid": "57ec11a2-ef5c-47ad-9be6-d9384407ba8c", 00:25:57.352 "is_configured": true, 00:25:57.352 "data_offset": 0, 00:25:57.352 "data_size": 65536 00:25:57.352 }, 00:25:57.352 { 00:25:57.352 "name": "BaseBdev4", 00:25:57.352 "uuid": "8e7991c5-5eb4-4102-9e51-57207c3eef2d", 00:25:57.352 "is_configured": true, 00:25:57.352 "data_offset": 0, 00:25:57.352 "data_size": 65536 00:25:57.352 } 00:25:57.352 ] 00:25:57.352 }' 00:25:57.352 10:39:51 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:57.610 10:39:51 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:57.610 10:39:51 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:57.610 10:39:51 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:57.610 10:39:51 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:58.545 10:39:52 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:58.545 10:39:52 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:58.545 10:39:52 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:58.545 10:39:52 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:58.545 10:39:52 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:58.545 10:39:52 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:58.545 10:39:52 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:58.545 10:39:52 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:58.803 10:39:52 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:58.803 "name": "raid_bdev1", 00:25:58.803 "uuid": "6a4aeba9-9a9f-441f-9457-7fa42ad5a417", 00:25:58.803 "strip_size_kb": 64, 00:25:58.803 "state": "online", 00:25:58.803 "raid_level": "raid5f", 00:25:58.803 "superblock": false, 00:25:58.803 "num_base_bdevs": 4, 00:25:58.803 "num_base_bdevs_discovered": 4, 00:25:58.803 "num_base_bdevs_operational": 4, 00:25:58.803 "process": { 00:25:58.803 "type": "rebuild", 00:25:58.803 "target": "spare", 00:25:58.803 "progress": { 00:25:58.803 "blocks": 132480, 00:25:58.803 "percent": 67 00:25:58.803 } 00:25:58.803 }, 00:25:58.803 "base_bdevs_list": [ 00:25:58.803 { 00:25:58.803 "name": "spare", 00:25:58.803 "uuid": "d70a08cb-1be2-53a3-94ba-8d4ff24bbe11", 00:25:58.803 "is_configured": true, 00:25:58.803 "data_offset": 0, 00:25:58.803 "data_size": 65536 00:25:58.803 }, 00:25:58.803 { 00:25:58.803 "name": "BaseBdev2", 00:25:58.803 "uuid": "c86fd84b-8fb9-4fe7-a6f0-c59e85002531", 00:25:58.803 "is_configured": true, 00:25:58.803 "data_offset": 0, 00:25:58.803 "data_size": 65536 00:25:58.803 }, 00:25:58.803 { 00:25:58.803 "name": "BaseBdev3", 00:25:58.803 "uuid": "57ec11a2-ef5c-47ad-9be6-d9384407ba8c", 00:25:58.803 "is_configured": true, 00:25:58.803 "data_offset": 0, 00:25:58.803 "data_size": 65536 00:25:58.803 }, 00:25:58.803 { 00:25:58.803 "name": "BaseBdev4", 00:25:58.803 "uuid": 
"8e7991c5-5eb4-4102-9e51-57207c3eef2d", 00:25:58.803 "is_configured": true, 00:25:58.803 "data_offset": 0, 00:25:58.803 "data_size": 65536 00:25:58.803 } 00:25:58.803 ] 00:25:58.803 }' 00:25:58.803 10:39:52 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:58.803 10:39:52 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:58.803 10:39:52 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:58.803 10:39:52 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:58.803 10:39:52 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:00.175 10:39:53 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:00.175 10:39:53 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:00.175 10:39:53 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:00.175 10:39:53 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:00.175 10:39:53 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:00.175 10:39:53 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:00.175 10:39:53 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:00.175 10:39:53 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:00.175 10:39:53 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:00.175 "name": "raid_bdev1", 00:26:00.175 "uuid": "6a4aeba9-9a9f-441f-9457-7fa42ad5a417", 00:26:00.175 "strip_size_kb": 64, 00:26:00.175 "state": "online", 00:26:00.175 "raid_level": "raid5f", 00:26:00.175 "superblock": false, 00:26:00.175 "num_base_bdevs": 4, 00:26:00.175 "num_base_bdevs_discovered": 4, 00:26:00.175 "num_base_bdevs_operational": 4, 00:26:00.175 "process": { 00:26:00.175 "type": "rebuild", 00:26:00.175 "target": "spare", 00:26:00.175 "progress": { 00:26:00.175 "blocks": 157440, 00:26:00.175 "percent": 80 00:26:00.175 } 00:26:00.175 }, 00:26:00.175 "base_bdevs_list": [ 00:26:00.175 { 00:26:00.175 "name": "spare", 00:26:00.175 "uuid": "d70a08cb-1be2-53a3-94ba-8d4ff24bbe11", 00:26:00.175 "is_configured": true, 00:26:00.175 "data_offset": 0, 00:26:00.175 "data_size": 65536 00:26:00.175 }, 00:26:00.175 { 00:26:00.175 "name": "BaseBdev2", 00:26:00.175 "uuid": "c86fd84b-8fb9-4fe7-a6f0-c59e85002531", 00:26:00.175 "is_configured": true, 00:26:00.175 "data_offset": 0, 00:26:00.175 "data_size": 65536 00:26:00.175 }, 00:26:00.175 { 00:26:00.175 "name": "BaseBdev3", 00:26:00.175 "uuid": "57ec11a2-ef5c-47ad-9be6-d9384407ba8c", 00:26:00.175 "is_configured": true, 00:26:00.175 "data_offset": 0, 00:26:00.175 "data_size": 65536 00:26:00.175 }, 00:26:00.175 { 00:26:00.175 "name": "BaseBdev4", 00:26:00.175 "uuid": "8e7991c5-5eb4-4102-9e51-57207c3eef2d", 00:26:00.175 "is_configured": true, 00:26:00.175 "data_offset": 0, 00:26:00.175 "data_size": 65536 00:26:00.175 } 00:26:00.175 ] 00:26:00.175 }' 00:26:00.175 10:39:53 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:00.175 10:39:53 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:00.175 10:39:53 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:00.175 10:39:54 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:00.175 10:39:54 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:01.186 10:39:55 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:01.186 10:39:55 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:01.186 10:39:55 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 
00:26:01.186 10:39:55 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:01.186 10:39:55 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:01.186 10:39:55 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:01.186 10:39:55 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:01.186 10:39:55 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:01.454 10:39:55 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:01.454 "name": "raid_bdev1", 00:26:01.454 "uuid": "6a4aeba9-9a9f-441f-9457-7fa42ad5a417", 00:26:01.454 "strip_size_kb": 64, 00:26:01.454 "state": "online", 00:26:01.454 "raid_level": "raid5f", 00:26:01.454 "superblock": false, 00:26:01.454 "num_base_bdevs": 4, 00:26:01.454 "num_base_bdevs_discovered": 4, 00:26:01.454 "num_base_bdevs_operational": 4, 00:26:01.454 "process": { 00:26:01.454 "type": "rebuild", 00:26:01.454 "target": "spare", 00:26:01.454 "progress": { 00:26:01.454 "blocks": 184320, 00:26:01.454 "percent": 93 00:26:01.454 } 00:26:01.454 }, 00:26:01.454 "base_bdevs_list": [ 00:26:01.454 { 00:26:01.454 "name": "spare", 00:26:01.454 "uuid": "d70a08cb-1be2-53a3-94ba-8d4ff24bbe11", 00:26:01.454 "is_configured": true, 00:26:01.454 "data_offset": 0, 00:26:01.454 "data_size": 65536 00:26:01.454 }, 00:26:01.454 { 00:26:01.454 "name": "BaseBdev2", 00:26:01.454 "uuid": "c86fd84b-8fb9-4fe7-a6f0-c59e85002531", 00:26:01.454 "is_configured": true, 00:26:01.454 "data_offset": 0, 00:26:01.454 "data_size": 65536 00:26:01.454 }, 00:26:01.454 { 00:26:01.454 "name": "BaseBdev3", 00:26:01.454 "uuid": "57ec11a2-ef5c-47ad-9be6-d9384407ba8c", 00:26:01.454 "is_configured": true, 00:26:01.454 "data_offset": 0, 00:26:01.454 "data_size": 65536 00:26:01.454 }, 00:26:01.454 { 00:26:01.454 "name": "BaseBdev4", 00:26:01.454 "uuid": "8e7991c5-5eb4-4102-9e51-57207c3eef2d", 00:26:01.454 "is_configured": true, 00:26:01.454 "data_offset": 0, 00:26:01.454 "data_size": 65536 00:26:01.454 } 00:26:01.454 ] 00:26:01.454 }' 00:26:01.454 10:39:55 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:01.454 10:39:55 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:01.454 10:39:55 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:01.712 10:39:55 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:01.712 10:39:55 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:02.276 [2024-07-12 10:39:55.936751] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:26:02.276 [2024-07-12 10:39:55.936823] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:26:02.276 [2024-07-12 10:39:55.936901] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:02.534 10:39:56 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:02.534 10:39:56 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:02.534 10:39:56 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:02.534 10:39:56 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:02.534 10:39:56 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:02.534 10:39:56 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:02.534 10:39:56 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:02.534 10:39:56 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:26:02.792 10:39:56 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:02.792 "name": "raid_bdev1", 00:26:02.792 "uuid": "6a4aeba9-9a9f-441f-9457-7fa42ad5a417", 00:26:02.792 "strip_size_kb": 64, 00:26:02.792 "state": "online", 00:26:02.792 "raid_level": "raid5f", 00:26:02.792 "superblock": false, 00:26:02.792 "num_base_bdevs": 4, 00:26:02.792 "num_base_bdevs_discovered": 4, 00:26:02.792 "num_base_bdevs_operational": 4, 00:26:02.792 "base_bdevs_list": [ 00:26:02.792 { 00:26:02.792 "name": "spare", 00:26:02.792 "uuid": "d70a08cb-1be2-53a3-94ba-8d4ff24bbe11", 00:26:02.792 "is_configured": true, 00:26:02.792 "data_offset": 0, 00:26:02.792 "data_size": 65536 00:26:02.792 }, 00:26:02.792 { 00:26:02.792 "name": "BaseBdev2", 00:26:02.792 "uuid": "c86fd84b-8fb9-4fe7-a6f0-c59e85002531", 00:26:02.792 "is_configured": true, 00:26:02.792 "data_offset": 0, 00:26:02.792 "data_size": 65536 00:26:02.792 }, 00:26:02.792 { 00:26:02.792 "name": "BaseBdev3", 00:26:02.792 "uuid": "57ec11a2-ef5c-47ad-9be6-d9384407ba8c", 00:26:02.792 "is_configured": true, 00:26:02.792 "data_offset": 0, 00:26:02.792 "data_size": 65536 00:26:02.792 }, 00:26:02.792 { 00:26:02.792 "name": "BaseBdev4", 00:26:02.792 "uuid": "8e7991c5-5eb4-4102-9e51-57207c3eef2d", 00:26:02.792 "is_configured": true, 00:26:02.792 "data_offset": 0, 00:26:02.792 "data_size": 65536 00:26:02.792 } 00:26:02.792 ] 00:26:02.792 }' 00:26:02.792 10:39:56 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:02.792 10:39:56 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:26:02.792 10:39:56 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:03.050 10:39:56 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:26:03.050 10:39:56 -- bdev/bdev_raid.sh@660 -- # break 00:26:03.050 10:39:56 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:03.050 10:39:56 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:03.050 10:39:56 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:26:03.050 10:39:56 -- bdev/bdev_raid.sh@185 -- # local target=none 00:26:03.050 10:39:56 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:03.050 10:39:56 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:03.050 10:39:56 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:03.050 10:39:56 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:03.050 "name": "raid_bdev1", 00:26:03.050 "uuid": "6a4aeba9-9a9f-441f-9457-7fa42ad5a417", 00:26:03.050 "strip_size_kb": 64, 00:26:03.050 "state": "online", 00:26:03.050 "raid_level": "raid5f", 00:26:03.050 "superblock": false, 00:26:03.050 "num_base_bdevs": 4, 00:26:03.050 "num_base_bdevs_discovered": 4, 00:26:03.050 "num_base_bdevs_operational": 4, 00:26:03.050 "base_bdevs_list": [ 00:26:03.050 { 00:26:03.050 "name": "spare", 00:26:03.050 "uuid": "d70a08cb-1be2-53a3-94ba-8d4ff24bbe11", 00:26:03.050 "is_configured": true, 00:26:03.050 "data_offset": 0, 00:26:03.050 "data_size": 65536 00:26:03.050 }, 00:26:03.050 { 00:26:03.050 "name": "BaseBdev2", 00:26:03.050 "uuid": "c86fd84b-8fb9-4fe7-a6f0-c59e85002531", 00:26:03.050 "is_configured": true, 00:26:03.050 "data_offset": 0, 00:26:03.050 "data_size": 65536 00:26:03.050 }, 00:26:03.050 { 00:26:03.050 "name": "BaseBdev3", 00:26:03.050 "uuid": "57ec11a2-ef5c-47ad-9be6-d9384407ba8c", 00:26:03.050 "is_configured": true, 00:26:03.050 "data_offset": 0, 00:26:03.050 "data_size": 65536 
00:26:03.050 }, 00:26:03.050 { 00:26:03.050 "name": "BaseBdev4", 00:26:03.050 "uuid": "8e7991c5-5eb4-4102-9e51-57207c3eef2d", 00:26:03.050 "is_configured": true, 00:26:03.050 "data_offset": 0, 00:26:03.050 "data_size": 65536 00:26:03.050 } 00:26:03.050 ] 00:26:03.050 }' 00:26:03.050 10:39:56 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:03.308 10:39:56 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:03.308 10:39:56 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:03.308 10:39:57 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:26:03.308 10:39:57 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:26:03.308 10:39:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:03.308 10:39:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:03.308 10:39:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:03.308 10:39:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:03.308 10:39:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:03.308 10:39:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:03.308 10:39:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:03.308 10:39:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:03.308 10:39:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:03.308 10:39:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:03.308 10:39:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:03.308 10:39:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:03.308 "name": "raid_bdev1", 00:26:03.308 "uuid": "6a4aeba9-9a9f-441f-9457-7fa42ad5a417", 00:26:03.308 "strip_size_kb": 64, 00:26:03.308 "state": "online", 00:26:03.308 "raid_level": "raid5f", 00:26:03.308 "superblock": false, 00:26:03.308 "num_base_bdevs": 4, 00:26:03.308 "num_base_bdevs_discovered": 4, 00:26:03.308 "num_base_bdevs_operational": 4, 00:26:03.308 "base_bdevs_list": [ 00:26:03.308 { 00:26:03.308 "name": "spare", 00:26:03.308 "uuid": "d70a08cb-1be2-53a3-94ba-8d4ff24bbe11", 00:26:03.308 "is_configured": true, 00:26:03.308 "data_offset": 0, 00:26:03.308 "data_size": 65536 00:26:03.308 }, 00:26:03.308 { 00:26:03.308 "name": "BaseBdev2", 00:26:03.308 "uuid": "c86fd84b-8fb9-4fe7-a6f0-c59e85002531", 00:26:03.308 "is_configured": true, 00:26:03.308 "data_offset": 0, 00:26:03.308 "data_size": 65536 00:26:03.308 }, 00:26:03.308 { 00:26:03.308 "name": "BaseBdev3", 00:26:03.308 "uuid": "57ec11a2-ef5c-47ad-9be6-d9384407ba8c", 00:26:03.308 "is_configured": true, 00:26:03.308 "data_offset": 0, 00:26:03.308 "data_size": 65536 00:26:03.308 }, 00:26:03.308 { 00:26:03.308 "name": "BaseBdev4", 00:26:03.308 "uuid": "8e7991c5-5eb4-4102-9e51-57207c3eef2d", 00:26:03.308 "is_configured": true, 00:26:03.308 "data_offset": 0, 00:26:03.308 "data_size": 65536 00:26:03.308 } 00:26:03.308 ] 00:26:03.308 }' 00:26:03.308 10:39:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:03.308 10:39:57 -- common/autotest_common.sh@10 -- # set +x 00:26:04.261 10:39:57 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:04.261 [2024-07-12 10:39:58.127113] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:04.261 [2024-07-12 10:39:58.127144] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to 
offline 00:26:04.261 [2024-07-12 10:39:58.127227] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:04.261 [2024-07-12 10:39:58.127311] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:04.261 [2024-07-12 10:39:58.127323] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:26:04.261 10:39:58 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:04.261 10:39:58 -- bdev/bdev_raid.sh@671 -- # jq length 00:26:04.519 10:39:58 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:26:04.519 10:39:58 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:26:04.519 10:39:58 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:26:04.519 10:39:58 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:04.519 10:39:58 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:26:04.519 10:39:58 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:04.519 10:39:58 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:26:04.519 10:39:58 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:04.519 10:39:58 -- bdev/nbd_common.sh@12 -- # local i 00:26:04.519 10:39:58 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:04.519 10:39:58 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:04.519 10:39:58 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:26:04.776 /dev/nbd0 00:26:04.776 10:39:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:04.776 10:39:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:04.776 10:39:58 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:26:04.776 10:39:58 -- common/autotest_common.sh@857 -- # local i 00:26:04.776 10:39:58 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:26:04.776 10:39:58 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:26:04.776 10:39:58 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:26:04.776 10:39:58 -- common/autotest_common.sh@861 -- # break 00:26:04.776 10:39:58 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:26:04.776 10:39:58 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:26:04.776 10:39:58 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:04.776 1+0 records in 00:26:04.776 1+0 records out 00:26:04.776 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000507079 s, 8.1 MB/s 00:26:04.776 10:39:58 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:04.776 10:39:58 -- common/autotest_common.sh@874 -- # size=4096 00:26:04.776 10:39:58 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:04.776 10:39:58 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:26:04.776 10:39:58 -- common/autotest_common.sh@877 -- # return 0 00:26:04.776 10:39:58 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:04.776 10:39:58 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:04.776 10:39:58 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:26:05.034 /dev/nbd1 00:26:05.034 10:39:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:26:05.034 10:39:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 
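The waitfornbd trace just above (@856-@877, repeated next for nbd1) shows how the harness decides an exported NBD device is actually usable: poll /proc/partitions until the kernel lists the device, then prove the data path with a single 4 KiB O_DIRECT read and check the byte count that landed. A condensed sketch reconstructed from the trace (the real implementation lives in the suite's autotest_common.sh; the scratch-file path and back-off sleep here are illustrative):

    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do                      # retry limit copied from the "(( i <= 20 ))" guards
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                                        # assumption: short back-off between probes
        done
        # one direct (page-cache-bypassing) read proves I/O reaches the backing bdev
        dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        [ "$(stat -c %s /tmp/nbdtest)" != 0 ] || return 1    # trace checks '[' 4096 '!=' 0 ']'
        rm -f /tmp/nbdtest
    }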
00:26:05.034 10:39:58 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:26:05.034 10:39:58 -- common/autotest_common.sh@857 -- # local i 00:26:05.034 10:39:58 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:26:05.034 10:39:58 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:26:05.034 10:39:58 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:26:05.034 10:39:58 -- common/autotest_common.sh@861 -- # break 00:26:05.034 10:39:58 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:26:05.034 10:39:58 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:26:05.034 10:39:58 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:05.034 1+0 records in 00:26:05.034 1+0 records out 00:26:05.034 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000533477 s, 7.7 MB/s 00:26:05.034 10:39:58 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:05.034 10:39:58 -- common/autotest_common.sh@874 -- # size=4096 00:26:05.034 10:39:58 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:05.034 10:39:58 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:26:05.034 10:39:58 -- common/autotest_common.sh@877 -- # return 0 00:26:05.034 10:39:58 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:05.034 10:39:58 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:05.034 10:39:58 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:26:05.292 10:39:59 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:26:05.292 10:39:59 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:05.292 10:39:59 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:26:05.292 10:39:59 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:05.292 10:39:59 -- bdev/nbd_common.sh@51 -- # local i 00:26:05.292 10:39:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:05.292 10:39:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:26:05.550 10:39:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:05.550 10:39:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:05.550 10:39:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:05.550 10:39:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:05.550 10:39:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:05.550 10:39:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:05.550 10:39:59 -- bdev/nbd_common.sh@41 -- # break 00:26:05.550 10:39:59 -- bdev/nbd_common.sh@45 -- # return 0 00:26:05.550 10:39:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:05.550 10:39:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:26:05.808 10:39:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:05.808 10:39:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:05.808 10:39:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:05.808 10:39:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:05.808 10:39:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:05.808 10:39:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:05.808 10:39:59 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:26:05.808 10:39:59 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:26:05.808 10:39:59 -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:26:05.808 10:39:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:05.808 10:39:59 -- bdev/nbd_common.sh@41 -- # break 00:26:05.808 10:39:59 -- bdev/nbd_common.sh@45 -- # return 0 00:26:05.808 10:39:59 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:26:05.808 10:39:59 -- bdev/bdev_raid.sh@709 -- # killprocess 134940 00:26:05.808 10:39:59 -- common/autotest_common.sh@926 -- # '[' -z 134940 ']' 00:26:05.808 10:39:59 -- common/autotest_common.sh@930 -- # kill -0 134940 00:26:05.808 10:39:59 -- common/autotest_common.sh@931 -- # uname 00:26:05.808 10:39:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:05.808 10:39:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 134940 00:26:05.808 10:39:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:05.808 10:39:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:05.808 10:39:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 134940' 00:26:05.808 killing process with pid 134940 00:26:05.808 10:39:59 -- common/autotest_common.sh@945 -- # kill 134940 00:26:05.808 Received shutdown signal, test time was about 60.000000 seconds 00:26:05.808 00:26:05.808 Latency(us) 00:26:05.808 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:05.808 =================================================================================================================== 00:26:05.808 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:05.808 [2024-07-12 10:39:59.627410] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:05.808 10:39:59 -- common/autotest_common.sh@950 -- # wait 134940 00:26:06.067 [2024-07-12 10:39:59.956883] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:07.441 ************************************ 00:26:07.441 END TEST raid5f_rebuild_test 00:26:07.441 ************************************ 00:26:07.441 10:40:00 -- bdev/bdev_raid.sh@711 -- # return 0 00:26:07.441 00:26:07.441 real 0m24.881s 00:26:07.441 user 0m36.351s 00:26:07.441 sys 0m2.485s 00:26:07.441 10:40:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:07.441 10:40:00 -- common/autotest_common.sh@10 -- # set +x 00:26:07.441 10:40:01 -- bdev/bdev_raid.sh@749 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false 00:26:07.441 10:40:01 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:26:07.441 10:40:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:07.441 10:40:01 -- common/autotest_common.sh@10 -- # set +x 00:26:07.441 ************************************ 00:26:07.441 START TEST raid5f_rebuild_test_sb 00:26:07.441 ************************************ 00:26:07.441 10:40:01 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid5f 4 true false 00:26:07.441 10:40:01 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:26:07.441 10:40:01 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:26:07.441 10:40:01 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:26:07.441 10:40:01 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:26:07.441 10:40:01 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:26:07.441 10:40:01 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:26:07.441 10:40:01 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:26:07.441 10:40:01 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:26:07.441 10:40:01 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 
00:26:07.441 10:40:01 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:26:07.441 10:40:01 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:26:07.441 10:40:01 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:26:07.441 10:40:01 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:26:07.441 10:40:01 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:26:07.441 10:40:01 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:26:07.441 10:40:01 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:26:07.441 10:40:01 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:26:07.441 10:40:01 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:26:07.441 10:40:01 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:26:07.441 10:40:01 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:26:07.441 10:40:01 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:26:07.441 10:40:01 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:26:07.441 10:40:01 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:26:07.441 10:40:01 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:26:07.441 10:40:01 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:26:07.441 10:40:01 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:26:07.441 10:40:01 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:26:07.441 10:40:01 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:26:07.441 10:40:01 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:26:07.441 10:40:01 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:26:07.441 10:40:01 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:26:07.441 10:40:01 -- bdev/bdev_raid.sh@544 -- # raid_pid=135603 00:26:07.441 10:40:01 -- bdev/bdev_raid.sh@545 -- # waitforlisten 135603 /var/tmp/spdk-raid.sock 00:26:07.441 10:40:01 -- common/autotest_common.sh@819 -- # '[' -z 135603 ']' 00:26:07.441 10:40:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:07.441 10:40:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:07.441 10:40:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:07.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:07.442 10:40:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:07.442 10:40:01 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:26:07.442 10:40:01 -- common/autotest_common.sh@10 -- # set +x 00:26:07.442 [2024-07-12 10:40:01.105879] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:26:07.442 I/O size of 3145728 is greater than zero copy threshold (65536). 00:26:07.442 Zero copy mechanism will not be used. 
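Before bdevperf is even exercised, the @521-@544 lines above show the test assembling its inputs: a base_bdevs array is generated from num_base_bdevs=4, and create_arg accumulates "-z 64" for the raid5f strip size plus "-s" because superblock=true. Condensed from the trace (names verbatim; the resulting bdev_raid_create call appears later at @563):

    num_base_bdevs=4 superblock=true
    base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done))
    strip_size=64
    create_arg+=" -z $strip_size"                  # raid5f (unlike raid1) takes a strip size
    [[ $superblock == true ]] && create_arg+=' -s' # -s asks bdev_raid_create for an on-disk superblock
    # expands to: bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1
    echo bdev_raid_create $create_arg -r raid5f -b "${base_bdevs[*]}" -n raid_bdev1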
00:26:07.442 [2024-07-12 10:40:01.106085] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135603 ] 00:26:07.442 [2024-07-12 10:40:01.266931] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:07.700 [2024-07-12 10:40:01.451956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:07.958 [2024-07-12 10:40:01.637792] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:08.216 10:40:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:08.216 10:40:02 -- common/autotest_common.sh@852 -- # return 0 00:26:08.216 10:40:02 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:26:08.216 10:40:02 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:26:08.216 10:40:02 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:26:08.473 BaseBdev1_malloc 00:26:08.473 10:40:02 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:26:08.732 [2024-07-12 10:40:02.440219] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:26:08.732 [2024-07-12 10:40:02.440317] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:08.732 [2024-07-12 10:40:02.440352] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:26:08.732 [2024-07-12 10:40:02.440412] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:08.732 [2024-07-12 10:40:02.442581] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:08.732 [2024-07-12 10:40:02.442626] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:08.732 BaseBdev1 00:26:08.732 10:40:02 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:26:08.732 10:40:02 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:26:08.732 10:40:02 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:26:08.990 BaseBdev2_malloc 00:26:08.990 10:40:02 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:26:08.990 [2024-07-12 10:40:02.847807] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:26:08.990 [2024-07-12 10:40:02.847870] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:08.990 [2024-07-12 10:40:02.847915] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:26:08.990 [2024-07-12 10:40:02.847966] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:08.990 [2024-07-12 10:40:02.850128] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:08.990 [2024-07-12 10:40:02.850173] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:08.990 BaseBdev2 00:26:08.990 10:40:02 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:26:08.990 10:40:02 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:26:08.990 10:40:02 -- bdev/bdev_raid.sh@550 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:26:09.247 BaseBdev3_malloc 00:26:09.247 10:40:03 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:26:09.505 [2024-07-12 10:40:03.232949] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:26:09.505 [2024-07-12 10:40:03.233014] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:09.505 [2024-07-12 10:40:03.233052] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:26:09.505 [2024-07-12 10:40:03.233095] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:09.505 [2024-07-12 10:40:03.235225] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:09.505 [2024-07-12 10:40:03.235276] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:26:09.505 BaseBdev3 00:26:09.505 10:40:03 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:26:09.505 10:40:03 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:26:09.505 10:40:03 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:26:09.763 BaseBdev4_malloc 00:26:09.763 10:40:03 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:26:09.763 [2024-07-12 10:40:03.614812] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:26:09.763 [2024-07-12 10:40:03.614881] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:09.763 [2024-07-12 10:40:03.614915] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:26:09.763 [2024-07-12 10:40:03.614957] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:09.763 [2024-07-12 10:40:03.617108] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:09.763 [2024-07-12 10:40:03.617158] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:26:09.763 BaseBdev4 00:26:09.763 10:40:03 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:26:10.021 spare_malloc 00:26:10.021 10:40:03 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:26:10.278 spare_delay 00:26:10.279 10:40:04 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:10.279 [2024-07-12 10:40:04.175889] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:10.279 [2024-07-12 10:40:04.175954] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:10.279 [2024-07-12 10:40:04.175984] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:26:10.279 [2024-07-12 10:40:04.176025] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:10.279 [2024-07-12 10:40:04.178193] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:26:10.279 [2024-07-12 10:40:04.178249] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:10.279 spare 00:26:10.279 10:40:04 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:26:10.536 [2024-07-12 10:40:04.360014] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:10.536 [2024-07-12 10:40:04.361848] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:10.536 [2024-07-12 10:40:04.361926] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:10.536 [2024-07-12 10:40:04.361981] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:10.536 [2024-07-12 10:40:04.362177] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:26:10.536 [2024-07-12 10:40:04.362197] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:26:10.536 [2024-07-12 10:40:04.362309] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:26:10.536 [2024-07-12 10:40:04.367706] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:26:10.536 [2024-07-12 10:40:04.367730] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:26:10.536 [2024-07-12 10:40:04.367888] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:10.536 10:40:04 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:26:10.536 10:40:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:10.536 10:40:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:10.536 10:40:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:10.536 10:40:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:10.536 10:40:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:10.536 10:40:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:10.536 10:40:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:10.536 10:40:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:10.536 10:40:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:10.536 10:40:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:10.536 10:40:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:10.793 10:40:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:10.793 "name": "raid_bdev1", 00:26:10.793 "uuid": "9464d1ca-a828-4bf7-b609-7fccb21f896f", 00:26:10.793 "strip_size_kb": 64, 00:26:10.793 "state": "online", 00:26:10.793 "raid_level": "raid5f", 00:26:10.793 "superblock": true, 00:26:10.793 "num_base_bdevs": 4, 00:26:10.793 "num_base_bdevs_discovered": 4, 00:26:10.793 "num_base_bdevs_operational": 4, 00:26:10.793 "base_bdevs_list": [ 00:26:10.793 { 00:26:10.793 "name": "BaseBdev1", 00:26:10.793 "uuid": "e62824a6-535e-50d5-9e2e-c582746032ff", 00:26:10.793 "is_configured": true, 00:26:10.793 "data_offset": 2048, 00:26:10.793 "data_size": 63488 00:26:10.793 }, 00:26:10.793 { 00:26:10.793 "name": "BaseBdev2", 00:26:10.793 "uuid": "ac6c908b-aefe-5ea0-b121-08cbad5f0536", 00:26:10.793 "is_configured": true, 00:26:10.793 
"data_offset": 2048, 00:26:10.793 "data_size": 63488 00:26:10.793 }, 00:26:10.793 { 00:26:10.793 "name": "BaseBdev3", 00:26:10.793 "uuid": "428e53b1-f020-567a-9317-735b4baf2b82", 00:26:10.793 "is_configured": true, 00:26:10.793 "data_offset": 2048, 00:26:10.793 "data_size": 63488 00:26:10.793 }, 00:26:10.793 { 00:26:10.793 "name": "BaseBdev4", 00:26:10.794 "uuid": "9257df6c-c778-53da-bcca-12d1ba6e9f1b", 00:26:10.794 "is_configured": true, 00:26:10.794 "data_offset": 2048, 00:26:10.794 "data_size": 63488 00:26:10.794 } 00:26:10.794 ] 00:26:10.794 }' 00:26:10.794 10:40:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:10.794 10:40:04 -- common/autotest_common.sh@10 -- # set +x 00:26:11.358 10:40:05 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:11.358 10:40:05 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:26:11.616 [2024-07-12 10:40:05.434287] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:11.616 10:40:05 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=190464 00:26:11.616 10:40:05 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:11.616 10:40:05 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:26:11.874 10:40:05 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:26:11.874 10:40:05 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:26:11.874 10:40:05 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:26:11.874 10:40:05 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:26:11.874 10:40:05 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:11.874 10:40:05 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:26:11.874 10:40:05 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:11.874 10:40:05 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:26:11.874 10:40:05 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:11.874 10:40:05 -- bdev/nbd_common.sh@12 -- # local i 00:26:11.874 10:40:05 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:11.874 10:40:05 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:11.874 10:40:05 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:26:12.132 [2024-07-12 10:40:05.842271] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:26:12.132 /dev/nbd0 00:26:12.132 10:40:05 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:12.132 10:40:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:12.132 10:40:05 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:26:12.132 10:40:05 -- common/autotest_common.sh@857 -- # local i 00:26:12.132 10:40:05 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:26:12.132 10:40:05 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:26:12.132 10:40:05 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:26:12.132 10:40:05 -- common/autotest_common.sh@861 -- # break 00:26:12.132 10:40:05 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:26:12.132 10:40:05 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:26:12.132 10:40:05 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:12.132 1+0 records in 00:26:12.132 1+0 records out 00:26:12.132 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000184993 s, 
22.1 MB/s 00:26:12.132 10:40:05 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:12.132 10:40:05 -- common/autotest_common.sh@874 -- # size=4096 00:26:12.132 10:40:05 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:12.132 10:40:05 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:26:12.132 10:40:05 -- common/autotest_common.sh@877 -- # return 0 00:26:12.132 10:40:05 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:12.132 10:40:05 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:12.132 10:40:05 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:26:12.132 10:40:05 -- bdev/bdev_raid.sh@581 -- # write_unit_size=384 00:26:12.132 10:40:05 -- bdev/bdev_raid.sh@582 -- # echo 192 00:26:12.132 10:40:05 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:26:12.697 496+0 records in 00:26:12.697 496+0 records out 00:26:12.697 97517568 bytes (98 MB, 93 MiB) copied, 0.563092 s, 173 MB/s 00:26:12.697 10:40:06 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:26:12.697 10:40:06 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:12.697 10:40:06 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:26:12.697 10:40:06 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:12.697 10:40:06 -- bdev/nbd_common.sh@51 -- # local i 00:26:12.697 10:40:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:12.697 10:40:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:26:12.955 10:40:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:12.955 10:40:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:12.955 10:40:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:12.955 10:40:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:12.955 10:40:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:12.955 10:40:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:12.955 10:40:06 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:26:12.955 [2024-07-12 10:40:06.670588] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:12.955 10:40:06 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:26:12.955 10:40:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:12.955 10:40:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:12.955 10:40:06 -- bdev/nbd_common.sh@41 -- # break 00:26:12.955 10:40:06 -- bdev/nbd_common.sh@45 -- # return 0 00:26:12.955 10:40:06 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:26:13.212 [2024-07-12 10:40:06.933385] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:13.212 10:40:06 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:26:13.212 10:40:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:13.212 10:40:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:13.212 10:40:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:13.212 10:40:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:13.212 10:40:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:13.212 10:40:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:13.212 10:40:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:13.212 10:40:06 -- 
bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:13.212 10:40:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:13.212 10:40:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:13.212 10:40:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:13.470 10:40:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:13.470 "name": "raid_bdev1", 00:26:13.470 "uuid": "9464d1ca-a828-4bf7-b609-7fccb21f896f", 00:26:13.470 "strip_size_kb": 64, 00:26:13.470 "state": "online", 00:26:13.470 "raid_level": "raid5f", 00:26:13.470 "superblock": true, 00:26:13.470 "num_base_bdevs": 4, 00:26:13.470 "num_base_bdevs_discovered": 3, 00:26:13.470 "num_base_bdevs_operational": 3, 00:26:13.470 "base_bdevs_list": [ 00:26:13.470 { 00:26:13.470 "name": null, 00:26:13.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:13.470 "is_configured": false, 00:26:13.470 "data_offset": 2048, 00:26:13.470 "data_size": 63488 00:26:13.470 }, 00:26:13.470 { 00:26:13.470 "name": "BaseBdev2", 00:26:13.470 "uuid": "ac6c908b-aefe-5ea0-b121-08cbad5f0536", 00:26:13.470 "is_configured": true, 00:26:13.470 "data_offset": 2048, 00:26:13.470 "data_size": 63488 00:26:13.470 }, 00:26:13.470 { 00:26:13.470 "name": "BaseBdev3", 00:26:13.470 "uuid": "428e53b1-f020-567a-9317-735b4baf2b82", 00:26:13.470 "is_configured": true, 00:26:13.470 "data_offset": 2048, 00:26:13.470 "data_size": 63488 00:26:13.470 }, 00:26:13.470 { 00:26:13.470 "name": "BaseBdev4", 00:26:13.470 "uuid": "9257df6c-c778-53da-bcca-12d1ba6e9f1b", 00:26:13.470 "is_configured": true, 00:26:13.470 "data_offset": 2048, 00:26:13.470 "data_size": 63488 00:26:13.470 } 00:26:13.470 ] 00:26:13.470 }' 00:26:13.470 10:40:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:13.470 10:40:07 -- common/autotest_common.sh@10 -- # set +x 00:26:14.036 10:40:07 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:14.294 [2024-07-12 10:40:08.101568] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:26:14.294 [2024-07-12 10:40:08.101612] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:14.294 [2024-07-12 10:40:08.111971] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002c860 00:26:14.294 [2024-07-12 10:40:08.118969] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:14.294 10:40:08 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:26:15.228 10:40:09 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:15.228 10:40:09 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:15.228 10:40:09 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:15.228 10:40:09 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:15.228 10:40:09 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:15.228 10:40:09 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:15.228 10:40:09 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:15.486 10:40:09 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:15.486 "name": "raid_bdev1", 00:26:15.486 "uuid": "9464d1ca-a828-4bf7-b609-7fccb21f896f", 00:26:15.486 "strip_size_kb": 64, 00:26:15.486 "state": "online", 00:26:15.486 "raid_level": "raid5f", 
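Just above, @591 hot-removes BaseBdev1 from the live array and @594 re-verifies raid_bdev1 as online raid5f with strip size 64 but only 3 operational members; the capture that follows shows the vacated slot as name:null with an all-zero uuid while the array keeps serving. A sketch of the jq checks this verification reduces to (field names as in the dump that follows; a real run would fail the test on any mismatch):

    info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
               bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    [[ $(jq -r .state         <<< "$info") == online ]]    # raid5f tolerates one missing member
    [[ $(jq -r .raid_level    <<< "$info") == raid5f ]]
    [[ $(jq -r .strip_size_kb <<< "$info") == 64 ]]
    (( $(jq .num_base_bdevs_discovered <<< "$info") == 3 ))
    (( $(jq '[.base_bdevs_list[] | select(.is_configured)] | length' <<< "$info") == 3 ))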
00:26:15.486 "superblock": true, 00:26:15.486 "num_base_bdevs": 4, 00:26:15.486 "num_base_bdevs_discovered": 4, 00:26:15.486 "num_base_bdevs_operational": 4, 00:26:15.486 "process": { 00:26:15.486 "type": "rebuild", 00:26:15.486 "target": "spare", 00:26:15.486 "progress": { 00:26:15.486 "blocks": 23040, 00:26:15.486 "percent": 12 00:26:15.486 } 00:26:15.486 }, 00:26:15.486 "base_bdevs_list": [ 00:26:15.486 { 00:26:15.486 "name": "spare", 00:26:15.486 "uuid": "07bddaea-8bbd-5ee9-a9ab-cdf458114424", 00:26:15.486 "is_configured": true, 00:26:15.486 "data_offset": 2048, 00:26:15.486 "data_size": 63488 00:26:15.486 }, 00:26:15.486 { 00:26:15.486 "name": "BaseBdev2", 00:26:15.486 "uuid": "ac6c908b-aefe-5ea0-b121-08cbad5f0536", 00:26:15.486 "is_configured": true, 00:26:15.486 "data_offset": 2048, 00:26:15.486 "data_size": 63488 00:26:15.486 }, 00:26:15.486 { 00:26:15.486 "name": "BaseBdev3", 00:26:15.486 "uuid": "428e53b1-f020-567a-9317-735b4baf2b82", 00:26:15.486 "is_configured": true, 00:26:15.486 "data_offset": 2048, 00:26:15.486 "data_size": 63488 00:26:15.486 }, 00:26:15.486 { 00:26:15.486 "name": "BaseBdev4", 00:26:15.486 "uuid": "9257df6c-c778-53da-bcca-12d1ba6e9f1b", 00:26:15.486 "is_configured": true, 00:26:15.486 "data_offset": 2048, 00:26:15.486 "data_size": 63488 00:26:15.486 } 00:26:15.486 ] 00:26:15.486 }' 00:26:15.486 10:40:09 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:15.744 10:40:09 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:15.744 10:40:09 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:15.744 10:40:09 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:15.744 10:40:09 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:26:15.744 [2024-07-12 10:40:09.639816] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:16.002 [2024-07-12 10:40:09.730692] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:16.002 [2024-07-12 10:40:09.730760] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:16.002 10:40:09 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:26:16.002 10:40:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:16.002 10:40:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:16.002 10:40:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:16.002 10:40:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:16.002 10:40:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:16.002 10:40:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:16.002 10:40:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:16.002 10:40:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:16.002 10:40:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:16.002 10:40:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:16.002 10:40:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:16.260 10:40:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:16.260 "name": "raid_bdev1", 00:26:16.260 "uuid": "9464d1ca-a828-4bf7-b609-7fccb21f896f", 00:26:16.260 "strip_size_kb": 64, 00:26:16.260 "state": "online", 00:26:16.260 "raid_level": "raid5f", 00:26:16.260 "superblock": true, 00:26:16.260 
"num_base_bdevs": 4, 00:26:16.260 "num_base_bdevs_discovered": 3, 00:26:16.260 "num_base_bdevs_operational": 3, 00:26:16.260 "base_bdevs_list": [ 00:26:16.260 { 00:26:16.260 "name": null, 00:26:16.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:16.260 "is_configured": false, 00:26:16.260 "data_offset": 2048, 00:26:16.260 "data_size": 63488 00:26:16.260 }, 00:26:16.260 { 00:26:16.260 "name": "BaseBdev2", 00:26:16.260 "uuid": "ac6c908b-aefe-5ea0-b121-08cbad5f0536", 00:26:16.260 "is_configured": true, 00:26:16.260 "data_offset": 2048, 00:26:16.260 "data_size": 63488 00:26:16.260 }, 00:26:16.260 { 00:26:16.260 "name": "BaseBdev3", 00:26:16.260 "uuid": "428e53b1-f020-567a-9317-735b4baf2b82", 00:26:16.260 "is_configured": true, 00:26:16.260 "data_offset": 2048, 00:26:16.260 "data_size": 63488 00:26:16.260 }, 00:26:16.260 { 00:26:16.260 "name": "BaseBdev4", 00:26:16.260 "uuid": "9257df6c-c778-53da-bcca-12d1ba6e9f1b", 00:26:16.260 "is_configured": true, 00:26:16.260 "data_offset": 2048, 00:26:16.260 "data_size": 63488 00:26:16.260 } 00:26:16.260 ] 00:26:16.260 }' 00:26:16.260 10:40:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:16.260 10:40:09 -- common/autotest_common.sh@10 -- # set +x 00:26:16.823 10:40:10 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:16.823 10:40:10 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:16.823 10:40:10 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:26:16.823 10:40:10 -- bdev/bdev_raid.sh@185 -- # local target=none 00:26:16.823 10:40:10 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:16.823 10:40:10 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:16.823 10:40:10 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:17.081 10:40:10 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:17.081 "name": "raid_bdev1", 00:26:17.081 "uuid": "9464d1ca-a828-4bf7-b609-7fccb21f896f", 00:26:17.081 "strip_size_kb": 64, 00:26:17.081 "state": "online", 00:26:17.081 "raid_level": "raid5f", 00:26:17.081 "superblock": true, 00:26:17.081 "num_base_bdevs": 4, 00:26:17.081 "num_base_bdevs_discovered": 3, 00:26:17.081 "num_base_bdevs_operational": 3, 00:26:17.081 "base_bdevs_list": [ 00:26:17.081 { 00:26:17.081 "name": null, 00:26:17.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:17.081 "is_configured": false, 00:26:17.081 "data_offset": 2048, 00:26:17.081 "data_size": 63488 00:26:17.081 }, 00:26:17.081 { 00:26:17.081 "name": "BaseBdev2", 00:26:17.081 "uuid": "ac6c908b-aefe-5ea0-b121-08cbad5f0536", 00:26:17.081 "is_configured": true, 00:26:17.081 "data_offset": 2048, 00:26:17.081 "data_size": 63488 00:26:17.081 }, 00:26:17.081 { 00:26:17.081 "name": "BaseBdev3", 00:26:17.081 "uuid": "428e53b1-f020-567a-9317-735b4baf2b82", 00:26:17.081 "is_configured": true, 00:26:17.081 "data_offset": 2048, 00:26:17.081 "data_size": 63488 00:26:17.081 }, 00:26:17.081 { 00:26:17.081 "name": "BaseBdev4", 00:26:17.081 "uuid": "9257df6c-c778-53da-bcca-12d1ba6e9f1b", 00:26:17.081 "is_configured": true, 00:26:17.081 "data_offset": 2048, 00:26:17.081 "data_size": 63488 00:26:17.081 } 00:26:17.081 ] 00:26:17.081 }' 00:26:17.081 10:40:10 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:17.081 10:40:10 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:17.081 10:40:10 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:17.340 10:40:11 -- bdev/bdev_raid.sh@191 -- # 
[[ none == \n\o\n\e ]] 00:26:17.340 10:40:11 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:17.340 [2024-07-12 10:40:11.249530] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:26:17.340 [2024-07-12 10:40:11.249570] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:17.598 [2024-07-12 10:40:11.259593] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ca00 00:26:17.598 [2024-07-12 10:40:11.266633] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:17.598 10:40:11 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:26:18.533 10:40:12 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:18.533 10:40:12 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:18.533 10:40:12 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:18.533 10:40:12 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:18.533 10:40:12 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:18.533 10:40:12 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:18.533 10:40:12 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:18.792 10:40:12 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:18.792 "name": "raid_bdev1", 00:26:18.792 "uuid": "9464d1ca-a828-4bf7-b609-7fccb21f896f", 00:26:18.792 "strip_size_kb": 64, 00:26:18.792 "state": "online", 00:26:18.792 "raid_level": "raid5f", 00:26:18.792 "superblock": true, 00:26:18.792 "num_base_bdevs": 4, 00:26:18.792 "num_base_bdevs_discovered": 4, 00:26:18.792 "num_base_bdevs_operational": 4, 00:26:18.792 "process": { 00:26:18.792 "type": "rebuild", 00:26:18.792 "target": "spare", 00:26:18.792 "progress": { 00:26:18.792 "blocks": 23040, 00:26:18.792 "percent": 12 00:26:18.792 } 00:26:18.792 }, 00:26:18.792 "base_bdevs_list": [ 00:26:18.792 { 00:26:18.792 "name": "spare", 00:26:18.793 "uuid": "07bddaea-8bbd-5ee9-a9ab-cdf458114424", 00:26:18.793 "is_configured": true, 00:26:18.793 "data_offset": 2048, 00:26:18.793 "data_size": 63488 00:26:18.793 }, 00:26:18.793 { 00:26:18.793 "name": "BaseBdev2", 00:26:18.793 "uuid": "ac6c908b-aefe-5ea0-b121-08cbad5f0536", 00:26:18.793 "is_configured": true, 00:26:18.793 "data_offset": 2048, 00:26:18.793 "data_size": 63488 00:26:18.793 }, 00:26:18.793 { 00:26:18.793 "name": "BaseBdev3", 00:26:18.793 "uuid": "428e53b1-f020-567a-9317-735b4baf2b82", 00:26:18.793 "is_configured": true, 00:26:18.793 "data_offset": 2048, 00:26:18.793 "data_size": 63488 00:26:18.793 }, 00:26:18.793 { 00:26:18.793 "name": "BaseBdev4", 00:26:18.793 "uuid": "9257df6c-c778-53da-bcca-12d1ba6e9f1b", 00:26:18.793 "is_configured": true, 00:26:18.793 "data_offset": 2048, 00:26:18.793 "data_size": 63488 00:26:18.793 } 00:26:18.793 ] 00:26:18.793 }' 00:26:18.793 10:40:12 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:18.793 10:40:12 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:18.793 10:40:12 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:18.793 10:40:12 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:18.793 10:40:12 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:26:18.793 10:40:12 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:26:18.793 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 
617: [: =: unary operator expected 00:26:18.793 10:40:12 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:26:18.793 10:40:12 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:26:18.793 10:40:12 -- bdev/bdev_raid.sh@657 -- # local timeout=725 00:26:18.793 10:40:12 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:18.793 10:40:12 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:18.793 10:40:12 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:18.793 10:40:12 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:18.793 10:40:12 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:18.793 10:40:12 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:18.793 10:40:12 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:18.793 10:40:12 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:19.051 10:40:12 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:19.051 "name": "raid_bdev1", 00:26:19.051 "uuid": "9464d1ca-a828-4bf7-b609-7fccb21f896f", 00:26:19.051 "strip_size_kb": 64, 00:26:19.051 "state": "online", 00:26:19.051 "raid_level": "raid5f", 00:26:19.051 "superblock": true, 00:26:19.051 "num_base_bdevs": 4, 00:26:19.051 "num_base_bdevs_discovered": 4, 00:26:19.051 "num_base_bdevs_operational": 4, 00:26:19.051 "process": { 00:26:19.051 "type": "rebuild", 00:26:19.051 "target": "spare", 00:26:19.051 "progress": { 00:26:19.051 "blocks": 28800, 00:26:19.051 "percent": 15 00:26:19.051 } 00:26:19.051 }, 00:26:19.051 "base_bdevs_list": [ 00:26:19.051 { 00:26:19.051 "name": "spare", 00:26:19.051 "uuid": "07bddaea-8bbd-5ee9-a9ab-cdf458114424", 00:26:19.051 "is_configured": true, 00:26:19.051 "data_offset": 2048, 00:26:19.051 "data_size": 63488 00:26:19.051 }, 00:26:19.051 { 00:26:19.051 "name": "BaseBdev2", 00:26:19.051 "uuid": "ac6c908b-aefe-5ea0-b121-08cbad5f0536", 00:26:19.051 "is_configured": true, 00:26:19.051 "data_offset": 2048, 00:26:19.051 "data_size": 63488 00:26:19.051 }, 00:26:19.051 { 00:26:19.051 "name": "BaseBdev3", 00:26:19.051 "uuid": "428e53b1-f020-567a-9317-735b4baf2b82", 00:26:19.051 "is_configured": true, 00:26:19.051 "data_offset": 2048, 00:26:19.051 "data_size": 63488 00:26:19.051 }, 00:26:19.051 { 00:26:19.051 "name": "BaseBdev4", 00:26:19.051 "uuid": "9257df6c-c778-53da-bcca-12d1ba6e9f1b", 00:26:19.052 "is_configured": true, 00:26:19.052 "data_offset": 2048, 00:26:19.052 "data_size": 63488 00:26:19.052 } 00:26:19.052 ] 00:26:19.052 }' 00:26:19.052 10:40:12 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:19.052 10:40:12 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:19.052 10:40:12 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:19.318 10:40:12 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:19.318 10:40:12 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:20.251 10:40:13 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:20.251 10:40:13 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:20.251 10:40:13 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:20.251 10:40:13 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:20.251 10:40:13 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:20.251 10:40:13 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:20.251 10:40:13 -- bdev/bdev_raid.sh@188 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:20.251 10:40:13 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:20.509 10:40:14 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:20.509 "name": "raid_bdev1", 00:26:20.509 "uuid": "9464d1ca-a828-4bf7-b609-7fccb21f896f", 00:26:20.509 "strip_size_kb": 64, 00:26:20.509 "state": "online", 00:26:20.509 "raid_level": "raid5f", 00:26:20.509 "superblock": true, 00:26:20.509 "num_base_bdevs": 4, 00:26:20.509 "num_base_bdevs_discovered": 4, 00:26:20.509 "num_base_bdevs_operational": 4, 00:26:20.509 "process": { 00:26:20.509 "type": "rebuild", 00:26:20.509 "target": "spare", 00:26:20.509 "progress": { 00:26:20.509 "blocks": 55680, 00:26:20.509 "percent": 29 00:26:20.509 } 00:26:20.509 }, 00:26:20.509 "base_bdevs_list": [ 00:26:20.509 { 00:26:20.509 "name": "spare", 00:26:20.509 "uuid": "07bddaea-8bbd-5ee9-a9ab-cdf458114424", 00:26:20.509 "is_configured": true, 00:26:20.509 "data_offset": 2048, 00:26:20.509 "data_size": 63488 00:26:20.509 }, 00:26:20.509 { 00:26:20.509 "name": "BaseBdev2", 00:26:20.509 "uuid": "ac6c908b-aefe-5ea0-b121-08cbad5f0536", 00:26:20.509 "is_configured": true, 00:26:20.509 "data_offset": 2048, 00:26:20.509 "data_size": 63488 00:26:20.509 }, 00:26:20.509 { 00:26:20.509 "name": "BaseBdev3", 00:26:20.509 "uuid": "428e53b1-f020-567a-9317-735b4baf2b82", 00:26:20.509 "is_configured": true, 00:26:20.509 "data_offset": 2048, 00:26:20.509 "data_size": 63488 00:26:20.509 }, 00:26:20.509 { 00:26:20.509 "name": "BaseBdev4", 00:26:20.509 "uuid": "9257df6c-c778-53da-bcca-12d1ba6e9f1b", 00:26:20.509 "is_configured": true, 00:26:20.509 "data_offset": 2048, 00:26:20.509 "data_size": 63488 00:26:20.509 } 00:26:20.509 ] 00:26:20.509 }' 00:26:20.509 10:40:14 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:20.509 10:40:14 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:20.509 10:40:14 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:20.509 10:40:14 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:20.509 10:40:14 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:21.443 10:40:15 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:21.443 10:40:15 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:21.443 10:40:15 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:21.443 10:40:15 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:21.443 10:40:15 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:21.443 10:40:15 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:21.443 10:40:15 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:21.443 10:40:15 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:21.701 10:40:15 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:21.701 "name": "raid_bdev1", 00:26:21.701 "uuid": "9464d1ca-a828-4bf7-b609-7fccb21f896f", 00:26:21.701 "strip_size_kb": 64, 00:26:21.701 "state": "online", 00:26:21.701 "raid_level": "raid5f", 00:26:21.701 "superblock": true, 00:26:21.701 "num_base_bdevs": 4, 00:26:21.701 "num_base_bdevs_discovered": 4, 00:26:21.701 "num_base_bdevs_operational": 4, 00:26:21.701 "process": { 00:26:21.701 "type": "rebuild", 00:26:21.701 "target": "spare", 00:26:21.701 "progress": { 00:26:21.701 "blocks": 80640, 00:26:21.701 "percent": 42 00:26:21.701 } 00:26:21.701 }, 
00:26:21.702 "base_bdevs_list": [ 00:26:21.702 { 00:26:21.702 "name": "spare", 00:26:21.702 "uuid": "07bddaea-8bbd-5ee9-a9ab-cdf458114424", 00:26:21.702 "is_configured": true, 00:26:21.702 "data_offset": 2048, 00:26:21.702 "data_size": 63488 00:26:21.702 }, 00:26:21.702 { 00:26:21.702 "name": "BaseBdev2", 00:26:21.702 "uuid": "ac6c908b-aefe-5ea0-b121-08cbad5f0536", 00:26:21.702 "is_configured": true, 00:26:21.702 "data_offset": 2048, 00:26:21.702 "data_size": 63488 00:26:21.702 }, 00:26:21.702 { 00:26:21.702 "name": "BaseBdev3", 00:26:21.702 "uuid": "428e53b1-f020-567a-9317-735b4baf2b82", 00:26:21.702 "is_configured": true, 00:26:21.702 "data_offset": 2048, 00:26:21.702 "data_size": 63488 00:26:21.702 }, 00:26:21.702 { 00:26:21.702 "name": "BaseBdev4", 00:26:21.702 "uuid": "9257df6c-c778-53da-bcca-12d1ba6e9f1b", 00:26:21.702 "is_configured": true, 00:26:21.702 "data_offset": 2048, 00:26:21.702 "data_size": 63488 00:26:21.702 } 00:26:21.702 ] 00:26:21.702 }' 00:26:21.702 10:40:15 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:21.702 10:40:15 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:21.702 10:40:15 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:21.960 10:40:15 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:21.960 10:40:15 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:22.895 10:40:16 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:22.895 10:40:16 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:22.895 10:40:16 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:22.895 10:40:16 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:22.895 10:40:16 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:22.895 10:40:16 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:22.895 10:40:16 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:22.895 10:40:16 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:23.153 10:40:16 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:23.153 "name": "raid_bdev1", 00:26:23.153 "uuid": "9464d1ca-a828-4bf7-b609-7fccb21f896f", 00:26:23.153 "strip_size_kb": 64, 00:26:23.153 "state": "online", 00:26:23.153 "raid_level": "raid5f", 00:26:23.153 "superblock": true, 00:26:23.153 "num_base_bdevs": 4, 00:26:23.153 "num_base_bdevs_discovered": 4, 00:26:23.153 "num_base_bdevs_operational": 4, 00:26:23.153 "process": { 00:26:23.153 "type": "rebuild", 00:26:23.153 "target": "spare", 00:26:23.153 "progress": { 00:26:23.153 "blocks": 105600, 00:26:23.153 "percent": 55 00:26:23.153 } 00:26:23.153 }, 00:26:23.153 "base_bdevs_list": [ 00:26:23.153 { 00:26:23.153 "name": "spare", 00:26:23.153 "uuid": "07bddaea-8bbd-5ee9-a9ab-cdf458114424", 00:26:23.153 "is_configured": true, 00:26:23.153 "data_offset": 2048, 00:26:23.153 "data_size": 63488 00:26:23.153 }, 00:26:23.153 { 00:26:23.153 "name": "BaseBdev2", 00:26:23.153 "uuid": "ac6c908b-aefe-5ea0-b121-08cbad5f0536", 00:26:23.153 "is_configured": true, 00:26:23.153 "data_offset": 2048, 00:26:23.153 "data_size": 63488 00:26:23.153 }, 00:26:23.153 { 00:26:23.153 "name": "BaseBdev3", 00:26:23.153 "uuid": "428e53b1-f020-567a-9317-735b4baf2b82", 00:26:23.153 "is_configured": true, 00:26:23.153 "data_offset": 2048, 00:26:23.153 "data_size": 63488 00:26:23.153 }, 00:26:23.153 { 00:26:23.153 "name": "BaseBdev4", 00:26:23.153 "uuid": "9257df6c-c778-53da-bcca-12d1ba6e9f1b", 
00:26:23.153 "is_configured": true, 00:26:23.153 "data_offset": 2048, 00:26:23.153 "data_size": 63488 00:26:23.153 } 00:26:23.153 ] 00:26:23.153 }' 00:26:23.153 10:40:16 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:23.153 10:40:16 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:23.153 10:40:16 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:23.153 10:40:17 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:23.153 10:40:17 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:24.527 10:40:18 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:24.527 10:40:18 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:24.527 10:40:18 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:24.527 10:40:18 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:24.527 10:40:18 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:24.527 10:40:18 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:24.527 10:40:18 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:24.527 10:40:18 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:24.527 10:40:18 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:24.527 "name": "raid_bdev1", 00:26:24.527 "uuid": "9464d1ca-a828-4bf7-b609-7fccb21f896f", 00:26:24.527 "strip_size_kb": 64, 00:26:24.527 "state": "online", 00:26:24.527 "raid_level": "raid5f", 00:26:24.527 "superblock": true, 00:26:24.527 "num_base_bdevs": 4, 00:26:24.527 "num_base_bdevs_discovered": 4, 00:26:24.527 "num_base_bdevs_operational": 4, 00:26:24.527 "process": { 00:26:24.527 "type": "rebuild", 00:26:24.527 "target": "spare", 00:26:24.527 "progress": { 00:26:24.527 "blocks": 132480, 00:26:24.527 "percent": 69 00:26:24.527 } 00:26:24.527 }, 00:26:24.527 "base_bdevs_list": [ 00:26:24.527 { 00:26:24.527 "name": "spare", 00:26:24.527 "uuid": "07bddaea-8bbd-5ee9-a9ab-cdf458114424", 00:26:24.527 "is_configured": true, 00:26:24.527 "data_offset": 2048, 00:26:24.527 "data_size": 63488 00:26:24.527 }, 00:26:24.527 { 00:26:24.527 "name": "BaseBdev2", 00:26:24.527 "uuid": "ac6c908b-aefe-5ea0-b121-08cbad5f0536", 00:26:24.527 "is_configured": true, 00:26:24.527 "data_offset": 2048, 00:26:24.527 "data_size": 63488 00:26:24.527 }, 00:26:24.527 { 00:26:24.527 "name": "BaseBdev3", 00:26:24.527 "uuid": "428e53b1-f020-567a-9317-735b4baf2b82", 00:26:24.527 "is_configured": true, 00:26:24.527 "data_offset": 2048, 00:26:24.527 "data_size": 63488 00:26:24.527 }, 00:26:24.527 { 00:26:24.527 "name": "BaseBdev4", 00:26:24.527 "uuid": "9257df6c-c778-53da-bcca-12d1ba6e9f1b", 00:26:24.527 "is_configured": true, 00:26:24.527 "data_offset": 2048, 00:26:24.527 "data_size": 63488 00:26:24.527 } 00:26:24.527 ] 00:26:24.527 }' 00:26:24.527 10:40:18 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:24.527 10:40:18 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:24.527 10:40:18 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:24.527 10:40:18 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:24.527 10:40:18 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:25.900 10:40:19 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:25.900 10:40:19 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:25.900 10:40:19 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:25.900 10:40:19 -- 
bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:25.900 10:40:19 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:25.900 10:40:19 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:25.900 10:40:19 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:25.900 10:40:19 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:25.900 10:40:19 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:25.900 "name": "raid_bdev1", 00:26:25.900 "uuid": "9464d1ca-a828-4bf7-b609-7fccb21f896f", 00:26:25.900 "strip_size_kb": 64, 00:26:25.900 "state": "online", 00:26:25.900 "raid_level": "raid5f", 00:26:25.900 "superblock": true, 00:26:25.900 "num_base_bdevs": 4, 00:26:25.900 "num_base_bdevs_discovered": 4, 00:26:25.900 "num_base_bdevs_operational": 4, 00:26:25.900 "process": { 00:26:25.900 "type": "rebuild", 00:26:25.900 "target": "spare", 00:26:25.900 "progress": { 00:26:25.900 "blocks": 157440, 00:26:25.900 "percent": 82 00:26:25.900 } 00:26:25.900 }, 00:26:25.900 "base_bdevs_list": [ 00:26:25.900 { 00:26:25.900 "name": "spare", 00:26:25.900 "uuid": "07bddaea-8bbd-5ee9-a9ab-cdf458114424", 00:26:25.900 "is_configured": true, 00:26:25.900 "data_offset": 2048, 00:26:25.900 "data_size": 63488 00:26:25.900 }, 00:26:25.900 { 00:26:25.900 "name": "BaseBdev2", 00:26:25.900 "uuid": "ac6c908b-aefe-5ea0-b121-08cbad5f0536", 00:26:25.900 "is_configured": true, 00:26:25.900 "data_offset": 2048, 00:26:25.900 "data_size": 63488 00:26:25.900 }, 00:26:25.900 { 00:26:25.900 "name": "BaseBdev3", 00:26:25.900 "uuid": "428e53b1-f020-567a-9317-735b4baf2b82", 00:26:25.900 "is_configured": true, 00:26:25.900 "data_offset": 2048, 00:26:25.900 "data_size": 63488 00:26:25.900 }, 00:26:25.900 { 00:26:25.900 "name": "BaseBdev4", 00:26:25.900 "uuid": "9257df6c-c778-53da-bcca-12d1ba6e9f1b", 00:26:25.900 "is_configured": true, 00:26:25.900 "data_offset": 2048, 00:26:25.901 "data_size": 63488 00:26:25.901 } 00:26:25.901 ] 00:26:25.901 }' 00:26:25.901 10:40:19 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:25.901 10:40:19 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:25.901 10:40:19 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:25.901 10:40:19 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:25.901 10:40:19 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:27.276 10:40:20 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:27.276 10:40:20 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:27.276 10:40:20 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:27.276 10:40:20 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:27.276 10:40:20 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:27.276 10:40:20 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:27.276 10:40:20 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:27.276 10:40:20 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:27.276 10:40:20 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:27.276 "name": "raid_bdev1", 00:26:27.276 "uuid": "9464d1ca-a828-4bf7-b609-7fccb21f896f", 00:26:27.276 "strip_size_kb": 64, 00:26:27.276 "state": "online", 00:26:27.276 "raid_level": "raid5f", 00:26:27.276 "superblock": true, 00:26:27.276 "num_base_bdevs": 4, 00:26:27.276 "num_base_bdevs_discovered": 4, 
00:26:27.276 "num_base_bdevs_operational": 4, 00:26:27.276 "process": { 00:26:27.276 "type": "rebuild", 00:26:27.276 "target": "spare", 00:26:27.276 "progress": { 00:26:27.276 "blocks": 184320, 00:26:27.276 "percent": 96 00:26:27.276 } 00:26:27.276 }, 00:26:27.276 "base_bdevs_list": [ 00:26:27.276 { 00:26:27.276 "name": "spare", 00:26:27.276 "uuid": "07bddaea-8bbd-5ee9-a9ab-cdf458114424", 00:26:27.276 "is_configured": true, 00:26:27.276 "data_offset": 2048, 00:26:27.276 "data_size": 63488 00:26:27.276 }, 00:26:27.276 { 00:26:27.276 "name": "BaseBdev2", 00:26:27.276 "uuid": "ac6c908b-aefe-5ea0-b121-08cbad5f0536", 00:26:27.276 "is_configured": true, 00:26:27.276 "data_offset": 2048, 00:26:27.276 "data_size": 63488 00:26:27.276 }, 00:26:27.276 { 00:26:27.276 "name": "BaseBdev3", 00:26:27.276 "uuid": "428e53b1-f020-567a-9317-735b4baf2b82", 00:26:27.276 "is_configured": true, 00:26:27.276 "data_offset": 2048, 00:26:27.276 "data_size": 63488 00:26:27.276 }, 00:26:27.276 { 00:26:27.276 "name": "BaseBdev4", 00:26:27.276 "uuid": "9257df6c-c778-53da-bcca-12d1ba6e9f1b", 00:26:27.276 "is_configured": true, 00:26:27.276 "data_offset": 2048, 00:26:27.276 "data_size": 63488 00:26:27.276 } 00:26:27.276 ] 00:26:27.276 }' 00:26:27.276 10:40:20 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:27.276 10:40:21 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:27.276 10:40:21 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:27.276 10:40:21 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:27.276 10:40:21 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:27.635 [2024-07-12 10:40:21.337073] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:26:27.635 [2024-07-12 10:40:21.337159] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:26:27.635 [2024-07-12 10:40:21.337321] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:28.240 10:40:22 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:28.240 10:40:22 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:28.240 10:40:22 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:28.240 10:40:22 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:28.240 10:40:22 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:28.240 10:40:22 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:28.240 10:40:22 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:28.240 10:40:22 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:28.508 10:40:22 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:28.508 "name": "raid_bdev1", 00:26:28.508 "uuid": "9464d1ca-a828-4bf7-b609-7fccb21f896f", 00:26:28.508 "strip_size_kb": 64, 00:26:28.508 "state": "online", 00:26:28.508 "raid_level": "raid5f", 00:26:28.508 "superblock": true, 00:26:28.508 "num_base_bdevs": 4, 00:26:28.508 "num_base_bdevs_discovered": 4, 00:26:28.508 "num_base_bdevs_operational": 4, 00:26:28.508 "base_bdevs_list": [ 00:26:28.508 { 00:26:28.508 "name": "spare", 00:26:28.508 "uuid": "07bddaea-8bbd-5ee9-a9ab-cdf458114424", 00:26:28.508 "is_configured": true, 00:26:28.508 "data_offset": 2048, 00:26:28.508 "data_size": 63488 00:26:28.508 }, 00:26:28.508 { 00:26:28.508 "name": "BaseBdev2", 00:26:28.508 "uuid": "ac6c908b-aefe-5ea0-b121-08cbad5f0536", 00:26:28.508 "is_configured": 
true, 00:26:28.508 "data_offset": 2048, 00:26:28.508 "data_size": 63488 00:26:28.508 }, 00:26:28.508 { 00:26:28.508 "name": "BaseBdev3", 00:26:28.508 "uuid": "428e53b1-f020-567a-9317-735b4baf2b82", 00:26:28.508 "is_configured": true, 00:26:28.508 "data_offset": 2048, 00:26:28.508 "data_size": 63488 00:26:28.508 }, 00:26:28.508 { 00:26:28.508 "name": "BaseBdev4", 00:26:28.508 "uuid": "9257df6c-c778-53da-bcca-12d1ba6e9f1b", 00:26:28.508 "is_configured": true, 00:26:28.508 "data_offset": 2048, 00:26:28.508 "data_size": 63488 00:26:28.508 } 00:26:28.508 ] 00:26:28.508 }' 00:26:28.508 10:40:22 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:28.508 10:40:22 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:26:28.508 10:40:22 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:28.767 10:40:22 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:26:28.767 10:40:22 -- bdev/bdev_raid.sh@660 -- # break 00:26:28.767 10:40:22 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:28.767 10:40:22 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:28.767 10:40:22 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:26:28.767 10:40:22 -- bdev/bdev_raid.sh@185 -- # local target=none 00:26:28.767 10:40:22 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:28.767 10:40:22 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:28.767 10:40:22 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:28.767 10:40:22 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:28.767 "name": "raid_bdev1", 00:26:28.767 "uuid": "9464d1ca-a828-4bf7-b609-7fccb21f896f", 00:26:28.767 "strip_size_kb": 64, 00:26:28.767 "state": "online", 00:26:28.767 "raid_level": "raid5f", 00:26:28.767 "superblock": true, 00:26:28.767 "num_base_bdevs": 4, 00:26:28.767 "num_base_bdevs_discovered": 4, 00:26:28.767 "num_base_bdevs_operational": 4, 00:26:28.767 "base_bdevs_list": [ 00:26:28.767 { 00:26:28.767 "name": "spare", 00:26:28.767 "uuid": "07bddaea-8bbd-5ee9-a9ab-cdf458114424", 00:26:28.767 "is_configured": true, 00:26:28.767 "data_offset": 2048, 00:26:28.767 "data_size": 63488 00:26:28.767 }, 00:26:28.767 { 00:26:28.767 "name": "BaseBdev2", 00:26:28.767 "uuid": "ac6c908b-aefe-5ea0-b121-08cbad5f0536", 00:26:28.767 "is_configured": true, 00:26:28.767 "data_offset": 2048, 00:26:28.767 "data_size": 63488 00:26:28.767 }, 00:26:28.767 { 00:26:28.767 "name": "BaseBdev3", 00:26:28.767 "uuid": "428e53b1-f020-567a-9317-735b4baf2b82", 00:26:28.767 "is_configured": true, 00:26:28.767 "data_offset": 2048, 00:26:28.767 "data_size": 63488 00:26:28.767 }, 00:26:28.767 { 00:26:28.767 "name": "BaseBdev4", 00:26:28.767 "uuid": "9257df6c-c778-53da-bcca-12d1ba6e9f1b", 00:26:28.767 "is_configured": true, 00:26:28.767 "data_offset": 2048, 00:26:28.767 "data_size": 63488 00:26:28.767 } 00:26:28.767 ] 00:26:28.767 }' 00:26:28.767 10:40:22 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:29.026 10:40:22 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:29.026 10:40:22 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:29.026 10:40:22 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:26:29.026 10:40:22 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:26:29.026 10:40:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:29.026 10:40:22 -- 
bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:29.026 10:40:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:29.026 10:40:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:29.026 10:40:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:29.026 10:40:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:29.026 10:40:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:29.026 10:40:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:29.026 10:40:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:29.026 10:40:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:29.026 10:40:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:29.286 10:40:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:29.286 "name": "raid_bdev1", 00:26:29.286 "uuid": "9464d1ca-a828-4bf7-b609-7fccb21f896f", 00:26:29.286 "strip_size_kb": 64, 00:26:29.286 "state": "online", 00:26:29.286 "raid_level": "raid5f", 00:26:29.286 "superblock": true, 00:26:29.286 "num_base_bdevs": 4, 00:26:29.286 "num_base_bdevs_discovered": 4, 00:26:29.286 "num_base_bdevs_operational": 4, 00:26:29.286 "base_bdevs_list": [ 00:26:29.286 { 00:26:29.286 "name": "spare", 00:26:29.286 "uuid": "07bddaea-8bbd-5ee9-a9ab-cdf458114424", 00:26:29.286 "is_configured": true, 00:26:29.286 "data_offset": 2048, 00:26:29.286 "data_size": 63488 00:26:29.286 }, 00:26:29.286 { 00:26:29.286 "name": "BaseBdev2", 00:26:29.286 "uuid": "ac6c908b-aefe-5ea0-b121-08cbad5f0536", 00:26:29.286 "is_configured": true, 00:26:29.286 "data_offset": 2048, 00:26:29.286 "data_size": 63488 00:26:29.286 }, 00:26:29.286 { 00:26:29.286 "name": "BaseBdev3", 00:26:29.286 "uuid": "428e53b1-f020-567a-9317-735b4baf2b82", 00:26:29.286 "is_configured": true, 00:26:29.286 "data_offset": 2048, 00:26:29.286 "data_size": 63488 00:26:29.286 }, 00:26:29.286 { 00:26:29.286 "name": "BaseBdev4", 00:26:29.286 "uuid": "9257df6c-c778-53da-bcca-12d1ba6e9f1b", 00:26:29.286 "is_configured": true, 00:26:29.286 "data_offset": 2048, 00:26:29.286 "data_size": 63488 00:26:29.286 } 00:26:29.286 ] 00:26:29.286 }' 00:26:29.286 10:40:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:29.286 10:40:23 -- common/autotest_common.sh@10 -- # set +x 00:26:29.851 10:40:23 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:30.109 [2024-07-12 10:40:23.821397] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:30.109 [2024-07-12 10:40:23.821444] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:30.109 [2024-07-12 10:40:23.821534] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:30.109 [2024-07-12 10:40:23.821651] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:30.109 [2024-07-12 10:40:23.821666] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:26:30.109 10:40:23 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:30.109 10:40:23 -- bdev/bdev_raid.sh@671 -- # jq length 00:26:30.109 10:40:24 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:26:30.109 10:40:24 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:26:30.109 10:40:24 -- 
bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:26:30.109 10:40:24 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:30.109 10:40:24 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:26:30.109 10:40:24 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:30.109 10:40:24 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:26:30.109 10:40:24 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:30.109 10:40:24 -- bdev/nbd_common.sh@12 -- # local i 00:26:30.109 10:40:24 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:30.109 10:40:24 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:30.109 10:40:24 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:26:30.367 /dev/nbd0 00:26:30.367 10:40:24 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:30.367 10:40:24 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:30.367 10:40:24 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:26:30.367 10:40:24 -- common/autotest_common.sh@857 -- # local i 00:26:30.367 10:40:24 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:26:30.367 10:40:24 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:26:30.367 10:40:24 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:26:30.367 10:40:24 -- common/autotest_common.sh@861 -- # break 00:26:30.367 10:40:24 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:26:30.367 10:40:24 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:26:30.367 10:40:24 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:30.367 1+0 records in 00:26:30.367 1+0 records out 00:26:30.367 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000624289 s, 6.6 MB/s 00:26:30.367 10:40:24 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:30.367 10:40:24 -- common/autotest_common.sh@874 -- # size=4096 00:26:30.367 10:40:24 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:30.367 10:40:24 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:26:30.367 10:40:24 -- common/autotest_common.sh@877 -- # return 0 00:26:30.367 10:40:24 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:30.367 10:40:24 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:30.367 10:40:24 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:26:30.625 /dev/nbd1 00:26:30.625 10:40:24 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:26:30.625 10:40:24 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:26:30.625 10:40:24 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:26:30.625 10:40:24 -- common/autotest_common.sh@857 -- # local i 00:26:30.625 10:40:24 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:26:30.625 10:40:24 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:26:30.625 10:40:24 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:26:30.625 10:40:24 -- common/autotest_common.sh@861 -- # break 00:26:30.625 10:40:24 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:26:30.625 10:40:24 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:26:30.625 10:40:24 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:30.625 1+0 records in 00:26:30.625 1+0 
records out 00:26:30.625 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00033943 s, 12.1 MB/s 00:26:30.625 10:40:24 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:30.625 10:40:24 -- common/autotest_common.sh@874 -- # size=4096 00:26:30.625 10:40:24 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:30.625 10:40:24 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:26:30.625 10:40:24 -- common/autotest_common.sh@877 -- # return 0 00:26:30.625 10:40:24 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:30.625 10:40:24 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:30.625 10:40:24 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:26:30.883 10:40:24 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:26:30.883 10:40:24 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:30.883 10:40:24 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:26:30.883 10:40:24 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:30.883 10:40:24 -- bdev/nbd_common.sh@51 -- # local i 00:26:30.883 10:40:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:30.883 10:40:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:26:30.883 10:40:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:31.141 10:40:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:31.141 10:40:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:31.141 10:40:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:31.141 10:40:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:31.141 10:40:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:31.141 10:40:24 -- bdev/nbd_common.sh@41 -- # break 00:26:31.141 10:40:24 -- bdev/nbd_common.sh@45 -- # return 0 00:26:31.141 10:40:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:31.141 10:40:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:26:31.141 10:40:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:31.141 10:40:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:31.141 10:40:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:31.141 10:40:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:31.141 10:40:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:31.141 10:40:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:31.141 10:40:25 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:26:31.399 10:40:25 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:26:31.399 10:40:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:31.399 10:40:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:31.399 10:40:25 -- bdev/nbd_common.sh@41 -- # break 00:26:31.399 10:40:25 -- bdev/nbd_common.sh@45 -- # return 0 00:26:31.399 10:40:25 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:26:31.399 10:40:25 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:26:31.399 10:40:25 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:26:31.399 10:40:25 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:26:31.657 10:40:25 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:26:31.657 
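Note on the nbd round trip recorded just above (nbd_start_disk for BaseBdev1 and spare, waitfornbd, cmp, nbd_stop_disks): this is how the test byte-compares the rebuilt array against the spare-backed member. A minimal sketch of that flow, reusing the rpc.py path and the /var/tmp/spdk-raid.sock socket from the log; the 20-try poll bound mirrors waitfornbd, and the real helper additionally dd-reads one 4 KiB block with iflag=direct to confirm the device answers I/O:

    # map the two bdevs onto NBD devices, then compare past the superblock
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc nbd_start_disk BaseBdev1 /dev/nbd0
    $rpc nbd_start_disk spare /dev/nbd1
    for nbd in nbd0 nbd1; do
        for ((i = 1; i <= 20; i++)); do      # waitfornbd: poll /proc/partitions
            grep -q -w "$nbd" /proc/partitions && break
            sleep 0.1
        done
    done
    # data_offset is 2048 blocks * 512 B = 1 MiB, so skip the superblock region
    cmp -i 1048576 /dev/nbd0 /dev/nbd1
    $rpc nbd_stop_disk /dev/nbd0
    $rpc nbd_stop_disk /dev/nbd1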
[2024-07-12 10:40:25.487299] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:26:31.657 [2024-07-12 10:40:25.487384] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:31.657 [2024-07-12 10:40:25.487424] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:26:31.657 [2024-07-12 10:40:25.487446] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:31.657 [2024-07-12 10:40:25.489784] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:31.657 [2024-07-12 10:40:25.489844] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:31.657 [2024-07-12 10:40:25.489941] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:26:31.657 [2024-07-12 10:40:25.489994] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:31.657 BaseBdev1 00:26:31.657 10:40:25 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:26:31.657 10:40:25 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:26:31.657 10:40:25 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:26:31.915 10:40:25 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:26:32.174 [2024-07-12 10:40:25.963378] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:26:32.174 [2024-07-12 10:40:25.963434] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:32.174 [2024-07-12 10:40:25.963470] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:26:32.174 [2024-07-12 10:40:25.963491] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:32.174 [2024-07-12 10:40:25.963851] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:32.174 [2024-07-12 10:40:25.963906] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:32.174 [2024-07-12 10:40:25.963986] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:26:32.174 [2024-07-12 10:40:25.963999] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:26:32.174 [2024-07-12 10:40:25.964005] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:32.174 [2024-07-12 10:40:25.964021] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state configuring 00:26:32.174 [2024-07-12 10:40:25.964096] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:32.174 BaseBdev2 00:26:32.174 10:40:25 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:26:32.174 10:40:25 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:26:32.174 10:40:25 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:26:32.433 10:40:26 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:26:32.433 [2024-07-12 10:40:26.323527] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:26:32.433 [2024-07-12 10:40:26.323585] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:32.433 [2024-07-12 10:40:26.323612] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:26:32.433 [2024-07-12 10:40:26.323636] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:32.433 [2024-07-12 10:40:26.324014] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:32.433 [2024-07-12 10:40:26.324071] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:26:32.433 [2024-07-12 10:40:26.324161] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:26:32.433 [2024-07-12 10:40:26.324183] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:32.433 BaseBdev3 00:26:32.433 10:40:26 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:26:32.433 10:40:26 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:26:32.433 10:40:26 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:26:32.691 10:40:26 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:26:32.949 [2024-07-12 10:40:26.675603] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:26:32.949 [2024-07-12 10:40:26.675660] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:32.949 [2024-07-12 10:40:26.675688] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:26:32.949 [2024-07-12 10:40:26.675726] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:32.949 [2024-07-12 10:40:26.676129] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:32.949 [2024-07-12 10:40:26.676187] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:26:32.949 [2024-07-12 10:40:26.676274] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:26:32.949 [2024-07-12 10:40:26.676308] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:32.949 BaseBdev4 00:26:32.949 10:40:26 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:26:33.207 10:40:26 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:33.207 [2024-07-12 10:40:27.036932] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:33.207 [2024-07-12 10:40:27.036994] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:33.207 [2024-07-12 10:40:27.037022] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:26:33.207 [2024-07-12 10:40:27.037048] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:33.207 [2024-07-12 10:40:27.037462] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:33.207 [2024-07-12 10:40:27.037517] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:33.207 [2024-07-12 
10:40:27.037609] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:26:33.207 [2024-07-12 10:40:27.037633] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:33.207 spare 00:26:33.207 10:40:27 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:26:33.207 10:40:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:33.207 10:40:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:33.207 10:40:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:33.207 10:40:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:33.207 10:40:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:33.208 10:40:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:33.208 10:40:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:33.208 10:40:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:33.208 10:40:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:33.208 10:40:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:33.208 10:40:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:33.466 [2024-07-12 10:40:27.137749] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c380 00:26:33.466 [2024-07-12 10:40:27.137772] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:26:33.466 [2024-07-12 10:40:27.137893] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00004d7b0 00:26:33.466 [2024-07-12 10:40:27.143008] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c380 00:26:33.466 [2024-07-12 10:40:27.143031] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c380 00:26:33.466 [2024-07-12 10:40:27.143172] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:33.466 10:40:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:33.466 "name": "raid_bdev1", 00:26:33.466 "uuid": "9464d1ca-a828-4bf7-b609-7fccb21f896f", 00:26:33.466 "strip_size_kb": 64, 00:26:33.466 "state": "online", 00:26:33.466 "raid_level": "raid5f", 00:26:33.466 "superblock": true, 00:26:33.466 "num_base_bdevs": 4, 00:26:33.466 "num_base_bdevs_discovered": 4, 00:26:33.466 "num_base_bdevs_operational": 4, 00:26:33.466 "base_bdevs_list": [ 00:26:33.466 { 00:26:33.466 "name": "spare", 00:26:33.466 "uuid": "07bddaea-8bbd-5ee9-a9ab-cdf458114424", 00:26:33.466 "is_configured": true, 00:26:33.466 "data_offset": 2048, 00:26:33.466 "data_size": 63488 00:26:33.466 }, 00:26:33.466 { 00:26:33.466 "name": "BaseBdev2", 00:26:33.466 "uuid": "ac6c908b-aefe-5ea0-b121-08cbad5f0536", 00:26:33.466 "is_configured": true, 00:26:33.466 "data_offset": 2048, 00:26:33.466 "data_size": 63488 00:26:33.466 }, 00:26:33.466 { 00:26:33.466 "name": "BaseBdev3", 00:26:33.466 "uuid": "428e53b1-f020-567a-9317-735b4baf2b82", 00:26:33.466 "is_configured": true, 00:26:33.466 "data_offset": 2048, 00:26:33.466 "data_size": 63488 00:26:33.466 }, 00:26:33.466 { 00:26:33.466 "name": "BaseBdev4", 00:26:33.466 "uuid": "9257df6c-c778-53da-bcca-12d1ba6e9f1b", 00:26:33.466 "is_configured": true, 00:26:33.466 "data_offset": 2048, 00:26:33.466 "data_size": 63488 00:26:33.466 } 00:26:33.466 ] 00:26:33.466 }' 00:26:33.466 10:40:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:33.466 10:40:27 
-- common/autotest_common.sh@10 -- # set +x 00:26:34.399 10:40:27 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:34.399 10:40:27 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:34.399 10:40:27 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:26:34.399 10:40:27 -- bdev/bdev_raid.sh@185 -- # local target=none 00:26:34.399 10:40:27 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:34.399 10:40:27 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:34.399 10:40:27 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:34.399 10:40:28 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:34.399 "name": "raid_bdev1", 00:26:34.399 "uuid": "9464d1ca-a828-4bf7-b609-7fccb21f896f", 00:26:34.399 "strip_size_kb": 64, 00:26:34.399 "state": "online", 00:26:34.399 "raid_level": "raid5f", 00:26:34.400 "superblock": true, 00:26:34.400 "num_base_bdevs": 4, 00:26:34.400 "num_base_bdevs_discovered": 4, 00:26:34.400 "num_base_bdevs_operational": 4, 00:26:34.400 "base_bdevs_list": [ 00:26:34.400 { 00:26:34.400 "name": "spare", 00:26:34.400 "uuid": "07bddaea-8bbd-5ee9-a9ab-cdf458114424", 00:26:34.400 "is_configured": true, 00:26:34.400 "data_offset": 2048, 00:26:34.400 "data_size": 63488 00:26:34.400 }, 00:26:34.400 { 00:26:34.400 "name": "BaseBdev2", 00:26:34.400 "uuid": "ac6c908b-aefe-5ea0-b121-08cbad5f0536", 00:26:34.400 "is_configured": true, 00:26:34.400 "data_offset": 2048, 00:26:34.400 "data_size": 63488 00:26:34.400 }, 00:26:34.400 { 00:26:34.400 "name": "BaseBdev3", 00:26:34.400 "uuid": "428e53b1-f020-567a-9317-735b4baf2b82", 00:26:34.400 "is_configured": true, 00:26:34.400 "data_offset": 2048, 00:26:34.400 "data_size": 63488 00:26:34.400 }, 00:26:34.400 { 00:26:34.400 "name": "BaseBdev4", 00:26:34.400 "uuid": "9257df6c-c778-53da-bcca-12d1ba6e9f1b", 00:26:34.400 "is_configured": true, 00:26:34.400 "data_offset": 2048, 00:26:34.400 "data_size": 63488 00:26:34.400 } 00:26:34.400 ] 00:26:34.400 }' 00:26:34.400 10:40:28 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:34.400 10:40:28 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:34.400 10:40:28 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:34.400 10:40:28 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:26:34.400 10:40:28 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:34.400 10:40:28 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:26:34.658 10:40:28 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:26:34.658 10:40:28 -- bdev/bdev_raid.sh@709 -- # killprocess 135603 00:26:34.658 10:40:28 -- common/autotest_common.sh@926 -- # '[' -z 135603 ']' 00:26:34.658 10:40:28 -- common/autotest_common.sh@930 -- # kill -0 135603 00:26:34.658 10:40:28 -- common/autotest_common.sh@931 -- # uname 00:26:34.658 10:40:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:34.658 10:40:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 135603 00:26:34.658 killing process with pid 135603 00:26:34.658 Received shutdown signal, test time was about 60.000000 seconds 00:26:34.658 00:26:34.658 Latency(us) 00:26:34.658 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:34.658 
=================================================================================================================== 00:26:34.658 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:34.658 10:40:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:34.658 10:40:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:34.658 10:40:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 135603' 00:26:34.658 10:40:28 -- common/autotest_common.sh@945 -- # kill 135603 00:26:34.658 10:40:28 -- common/autotest_common.sh@950 -- # wait 135603 00:26:34.658 [2024-07-12 10:40:28.482385] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:34.658 [2024-07-12 10:40:28.482446] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:34.658 [2024-07-12 10:40:28.482516] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:34.658 [2024-07-12 10:40:28.482526] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c380 name raid_bdev1, state offline 00:26:34.916 [2024-07-12 10:40:28.811997] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:36.292 ************************************ 00:26:36.292 END TEST raid5f_rebuild_test_sb 00:26:36.292 ************************************ 00:26:36.292 10:40:29 -- bdev/bdev_raid.sh@711 -- # return 0 00:26:36.292 00:26:36.292 real 0m28.787s 00:26:36.292 user 0m43.967s 00:26:36.292 sys 0m2.783s 00:26:36.292 10:40:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:36.292 10:40:29 -- common/autotest_common.sh@10 -- # set +x 00:26:36.292 10:40:29 -- bdev/bdev_raid.sh@754 -- # rm -f /raidrandtest 00:26:36.292 00:26:36.292 real 11m52.411s 00:26:36.292 user 19m47.181s 00:26:36.292 sys 1m23.499s 00:26:36.292 10:40:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:36.292 10:40:29 -- common/autotest_common.sh@10 -- # set +x 00:26:36.292 ************************************ 00:26:36.292 END TEST bdev_raid 00:26:36.292 ************************************ 00:26:36.292 10:40:29 -- spdk/autotest.sh@197 -- # run_test bdevperf_config /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:26:36.292 10:40:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:36.292 10:40:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:36.292 10:40:29 -- common/autotest_common.sh@10 -- # set +x 00:26:36.292 ************************************ 00:26:36.292 START TEST bdevperf_config 00:26:36.292 ************************************ 00:26:36.292 10:40:29 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:26:36.292 * Looking for test storage... 
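Note on the create_job calls that follow: they assemble the INI-style job file that bdevperf later consumes through -j test.conf. Each call opens a [section] header and, when rw/filename arguments are supplied, emits them as key=value lines, with [global] additionally pulling in shared defaults via cat. A loose sketch of the idea (the key names come from the rw=/filename= locals visible in the xtrace; the body is a reconstruction, not the helper's actual code):

    create_job() {    # simplified stand-in for create_job in bdevperf/common.sh
        local job_section=$1 rw=$2 filename=$3
        printf '[%s]\n' "$job_section"
        if [[ -n $rw ]]; then printf 'rw=%s\n' "$rw"; fi
        if [[ -n $filename ]]; then printf 'filename=%s\n' "$filename"; fi
    }
    create_job global read Malloc0 >> test.conf   # [global] + rw=read + filename=Malloc0
    create_job job0 >> test.conf                  # bare [job0] section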
00:26:36.292 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf 00:26:36.292 10:40:29 -- bdevperf/test_config.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/common.sh 00:26:36.292 10:40:29 -- bdevperf/common.sh@5 -- # bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf 00:26:36.292 10:40:29 -- bdevperf/test_config.sh@12 -- # jsonconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json 00:26:36.292 10:40:29 -- bdevperf/test_config.sh@13 -- # testconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:26:36.292 10:40:29 -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:36.292 10:40:29 -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:26:36.292 10:40:29 -- bdevperf/common.sh@8 -- # local job_section=global 00:26:36.292 10:40:29 -- bdevperf/common.sh@9 -- # local rw=read 00:26:36.292 10:40:29 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:26:36.292 10:40:29 -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:26:36.292 10:40:29 -- bdevperf/common.sh@13 -- # cat 00:26:36.292 10:40:29 -- bdevperf/common.sh@18 -- # job='[global]' 00:26:36.292 10:40:29 -- bdevperf/common.sh@19 -- # echo 00:26:36.292 00:26:36.292 10:40:29 -- bdevperf/common.sh@20 -- # cat 00:26:36.292 10:40:30 -- bdevperf/test_config.sh@18 -- # create_job job0 00:26:36.292 10:40:30 -- bdevperf/common.sh@8 -- # local job_section=job0 00:26:36.292 10:40:30 -- bdevperf/common.sh@9 -- # local rw= 00:26:36.292 00:26:36.292 10:40:30 -- bdevperf/common.sh@10 -- # local filename= 00:26:36.292 10:40:30 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:26:36.292 10:40:30 -- bdevperf/common.sh@18 -- # job='[job0]' 00:26:36.292 10:40:30 -- bdevperf/common.sh@19 -- # echo 00:26:36.292 10:40:30 -- bdevperf/common.sh@20 -- # cat 00:26:36.292 10:40:30 -- bdevperf/test_config.sh@19 -- # create_job job1 00:26:36.292 10:40:30 -- bdevperf/common.sh@8 -- # local job_section=job1 00:26:36.292 10:40:30 -- bdevperf/common.sh@9 -- # local rw= 00:26:36.292 10:40:30 -- bdevperf/common.sh@10 -- # local filename= 00:26:36.292 00:26:36.292 10:40:30 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:26:36.292 10:40:30 -- bdevperf/common.sh@18 -- # job='[job1]' 00:26:36.292 10:40:30 -- bdevperf/common.sh@19 -- # echo 00:26:36.292 10:40:30 -- bdevperf/common.sh@20 -- # cat 00:26:36.292 10:40:30 -- bdevperf/test_config.sh@20 -- # create_job job2 00:26:36.292 10:40:30 -- bdevperf/common.sh@8 -- # local job_section=job2 00:26:36.292 10:40:30 -- bdevperf/common.sh@9 -- # local rw= 00:26:36.292 10:40:30 -- bdevperf/common.sh@10 -- # local filename= 00:26:36.292 10:40:30 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:26:36.292 00:26:36.292 10:40:30 -- bdevperf/common.sh@18 -- # job='[job2]' 00:26:36.292 10:40:30 -- bdevperf/common.sh@19 -- # echo 00:26:36.292 10:40:30 -- bdevperf/common.sh@20 -- # cat 00:26:36.292 10:40:30 -- bdevperf/test_config.sh@21 -- # create_job job3 00:26:36.292 10:40:30 -- bdevperf/common.sh@8 -- # local job_section=job3 00:26:36.292 10:40:30 -- bdevperf/common.sh@9 -- # local rw= 00:26:36.292 10:40:30 -- bdevperf/common.sh@10 -- # local filename= 00:26:36.292 10:40:30 -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:26:36.292 10:40:30 -- bdevperf/common.sh@18 -- # job='[job3]' 00:26:36.292 00:26:36.292 10:40:30 -- bdevperf/common.sh@19 -- # echo 00:26:36.292 10:40:30 -- bdevperf/common.sh@20 -- # cat 00:26:36.292 10:40:30 -- bdevperf/test_config.sh@22 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:26:40.469 10:40:34 -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-07-12 10:40:30.071135] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:26:40.469 [2024-07-12 10:40:30.071652] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136413 ] 00:26:40.469 Using job config with 4 jobs 00:26:40.469 [2024-07-12 10:40:30.222707] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:40.469 [2024-07-12 10:40:30.428336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:40.469 cpumask for '\''job0'\'' is too big 00:26:40.469 cpumask for '\''job1'\'' is too big 00:26:40.469 cpumask for '\''job2'\'' is too big 00:26:40.469 cpumask for '\''job3'\'' is too big 00:26:40.469 Running I/O for 2 seconds... 00:26:40.469 00:26:40.469 Latency(us) 00:26:40.469 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:40.469 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:40.469 Malloc0 : 2.01 33316.49 32.54 0.00 0.00 7677.79 1437.32 12094.37 00:26:40.469 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:40.469 Malloc0 : 2.01 33293.85 32.51 0.00 0.00 7670.34 1362.85 10724.07 00:26:40.469 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:40.469 Malloc0 : 2.02 33271.24 32.49 0.00 0.00 7663.30 1429.88 9234.62 00:26:40.469 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:40.469 Malloc0 : 2.02 33249.65 32.47 0.00 0.00 7655.29 1422.43 8698.41 00:26:40.469 =================================================================================================================== 00:26:40.469 Total : 133131.25 130.01 0.00 0.00 7666.68 1362.85 12094.37' 00:26:40.469 10:40:34 -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-07-12 10:40:30.071135] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:26:40.469 [2024-07-12 10:40:30.071652] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136413 ] 00:26:40.469 Using job config with 4 jobs 00:26:40.469 [2024-07-12 10:40:30.222707] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:40.469 [2024-07-12 10:40:30.428336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:40.469 cpumask for '\''job0'\'' is too big 00:26:40.469 cpumask for '\''job1'\'' is too big 00:26:40.469 cpumask for '\''job2'\'' is too big 00:26:40.469 cpumask for '\''job3'\'' is too big 00:26:40.469 Running I/O for 2 seconds... 
00:26:40.469 00:26:40.470 Latency(us) 00:26:40.470 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:40.470 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:40.470 Malloc0 : 2.01 33316.49 32.54 0.00 0.00 7677.79 1437.32 12094.37 00:26:40.470 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:40.470 Malloc0 : 2.01 33293.85 32.51 0.00 0.00 7670.34 1362.85 10724.07 00:26:40.470 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:40.470 Malloc0 : 2.02 33271.24 32.49 0.00 0.00 7663.30 1429.88 9234.62 00:26:40.470 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:40.470 Malloc0 : 2.02 33249.65 32.47 0.00 0.00 7655.29 1422.43 8698.41 00:26:40.470 =================================================================================================================== 00:26:40.470 Total : 133131.25 130.01 0.00 0.00 7666.68 1362.85 12094.37' 00:26:40.470 10:40:34 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:26:40.470 10:40:34 -- bdevperf/common.sh@32 -- # echo '[2024-07-12 10:40:30.071135] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:26:40.470 [2024-07-12 10:40:30.071652] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136413 ] 00:26:40.470 Using job config with 4 jobs 00:26:40.470 [2024-07-12 10:40:30.222707] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:40.470 [2024-07-12 10:40:30.428336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:40.470 cpumask for '\''job0'\'' is too big 00:26:40.470 cpumask for '\''job1'\'' is too big 00:26:40.470 cpumask for '\''job2'\'' is too big 00:26:40.470 cpumask for '\''job3'\'' is too big 00:26:40.470 Running I/O for 2 seconds... 00:26:40.470 00:26:40.470 Latency(us) 00:26:40.470 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:40.470 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:40.470 Malloc0 : 2.01 33316.49 32.54 0.00 0.00 7677.79 1437.32 12094.37 00:26:40.470 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:40.470 Malloc0 : 2.01 33293.85 32.51 0.00 0.00 7670.34 1362.85 10724.07 00:26:40.470 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:40.470 Malloc0 : 2.02 33271.24 32.49 0.00 0.00 7663.30 1429.88 9234.62 00:26:40.470 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:40.470 Malloc0 : 2.02 33249.65 32.47 0.00 0.00 7655.29 1422.43 8698.41 00:26:40.470 =================================================================================================================== 00:26:40.470 Total : 133131.25 130.01 0.00 0.00 7666.68 1362.85 12094.37' 00:26:40.470 10:40:34 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:26:40.470 10:40:34 -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:26:40.470 10:40:34 -- bdevperf/test_config.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -C -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:26:40.470 [2024-07-12 10:40:34.184938] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:26:40.470 [2024-07-12 10:40:34.185154] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136474 ] 00:26:40.470 [2024-07-12 10:40:34.351668] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:40.729 [2024-07-12 10:40:34.558356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:41.296 cpumask for 'job0' is too big 00:26:41.296 cpumask for 'job1' is too big 00:26:41.296 cpumask for 'job2' is too big 00:26:41.296 cpumask for 'job3' is too big 00:26:44.581 10:40:38 -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:26:44.581 Running I/O for 2 seconds... 00:26:44.581 00:26:44.581 Latency(us) 00:26:44.581 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:44.581 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:44.581 Malloc0 : 2.02 32137.52 31.38 0.00 0.00 7962.18 1444.77 16086.11 00:26:44.581 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:44.581 Malloc0 : 2.02 32115.99 31.36 0.00 0.00 7953.04 1362.85 17515.99 00:26:44.581 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:44.581 Malloc0 : 2.02 32095.26 31.34 0.00 0.00 7944.15 1444.77 18826.71 00:26:44.581 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:44.581 Malloc0 : 2.02 32074.32 31.32 0.00 0.00 7937.24 1459.67 18945.86 00:26:44.581 =================================================================================================================== 00:26:44.581 Total : 128423.09 125.41 0.00 0.00 7949.15 1362.85 18945.86' 00:26:44.581 10:40:38 -- bdevperf/test_config.sh@27 -- # cleanup 00:26:44.581 10:40:38 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:26:44.581 10:40:38 -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:26:44.581 10:40:38 -- bdevperf/common.sh@8 -- # local job_section=job0 00:26:44.581 00:26:44.581 10:40:38 -- bdevperf/common.sh@9 -- # local rw=write 00:26:44.581 10:40:38 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:26:44.581 10:40:38 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:26:44.581 10:40:38 -- bdevperf/common.sh@18 -- # job='[job0]' 00:26:44.581 10:40:38 -- bdevperf/common.sh@19 -- # echo 00:26:44.581 10:40:38 -- bdevperf/common.sh@20 -- # cat 00:26:44.581 10:40:38 -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0 00:26:44.581 10:40:38 -- bdevperf/common.sh@8 -- # local job_section=job1 00:26:44.581 10:40:38 -- bdevperf/common.sh@9 -- # local rw=write 00:26:44.581 00:26:44.581 10:40:38 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:26:44.581 10:40:38 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:26:44.581 10:40:38 -- bdevperf/common.sh@18 -- # job='[job1]' 00:26:44.581 10:40:38 -- bdevperf/common.sh@19 -- # echo 00:26:44.581 10:40:38 -- bdevperf/common.sh@20 -- # cat 00:26:44.581 10:40:38 -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:26:44.581 10:40:38 -- bdevperf/common.sh@8 -- # local job_section=job2 00:26:44.581 00:26:44.581 10:40:38 -- bdevperf/common.sh@9 -- # local rw=write 00:26:44.581 10:40:38 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:26:44.581 10:40:38 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:26:44.581 10:40:38 -- 
bdevperf/common.sh@18 -- # job='[job2]' 00:26:44.581 10:40:38 -- bdevperf/common.sh@19 -- # echo 00:26:44.581 10:40:38 -- bdevperf/common.sh@20 -- # cat 00:26:44.581 10:40:38 -- bdevperf/test_config.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:26:48.768 10:40:42 -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-07-12 10:40:38.225057] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:26:48.768 [2024-07-12 10:40:38.225206] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136543 ] 00:26:48.768 Using job config with 3 jobs 00:26:48.768 [2024-07-12 10:40:38.372788] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:48.768 [2024-07-12 10:40:38.545356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:48.768 cpumask for '\''job0'\'' is too big 00:26:48.768 cpumask for '\''job1'\'' is too big 00:26:48.768 cpumask for '\''job2'\'' is too big 00:26:48.768 Running I/O for 2 seconds... 00:26:48.768 00:26:48.768 Latency(us) 00:26:48.768 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:48.768 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:48.768 Malloc0 : 2.01 44636.72 43.59 0.00 0.00 5729.17 1407.53 8519.68 00:26:48.768 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:48.768 Malloc0 : 2.01 44649.14 43.60 0.00 0.00 5718.46 1370.30 7149.38 00:26:48.768 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:48.768 Malloc0 : 2.01 44620.08 43.57 0.00 0.00 5712.67 1377.75 7060.01 00:26:48.768 =================================================================================================================== 00:26:48.768 Total : 133905.94 130.77 0.00 0.00 5720.09 1370.30 8519.68' 00:26:48.768 10:40:42 -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-07-12 10:40:38.225057] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:26:48.768 [2024-07-12 10:40:38.225206] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136543 ] 00:26:48.768 Using job config with 3 jobs 00:26:48.768 [2024-07-12 10:40:38.372788] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:48.768 [2024-07-12 10:40:38.545356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:48.768 cpumask for '\''job0'\'' is too big 00:26:48.768 cpumask for '\''job1'\'' is too big 00:26:48.768 cpumask for '\''job2'\'' is too big 00:26:48.768 Running I/O for 2 seconds... 
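The create_job calls traced here build test.conf one INI section at a time: each call records a [section] header plus optional rw= and filename= lines, and only the global section also cats a shared option block (the common.sh@13 step). A rough sketch of that pattern, assuming the sections are simply appended to the test.conf path seen in the cleanup rm -f; the real helper in bdevperf/common.sh does more:

    # Sketch: append one INI job section to the bdevperf job config.
    # $testdir/test.conf is an assumption here; the trace uses the full
    # /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf path.
    create_job() {
        local job_section=$1 rw=$2 filename=$3
        {
            echo "[$job_section]"
            [[ -z $rw ]] || echo "rw=$rw"
            [[ -z $filename ]] || echo "filename=$filename"
        } >> "$testdir/test.conf"
    }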
00:26:48.768 00:26:48.768 Latency(us) 00:26:48.768 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:48.768 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:48.768 Malloc0 : 2.01 44636.72 43.59 0.00 0.00 5729.17 1407.53 8519.68 00:26:48.768 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:48.768 Malloc0 : 2.01 44649.14 43.60 0.00 0.00 5718.46 1370.30 7149.38 00:26:48.768 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:48.768 Malloc0 : 2.01 44620.08 43.57 0.00 0.00 5712.67 1377.75 7060.01 00:26:48.768 =================================================================================================================== 00:26:48.768 Total : 133905.94 130.77 0.00 0.00 5720.09 1370.30 8519.68' 00:26:48.768 10:40:42 -- bdevperf/common.sh@32 -- # echo '[2024-07-12 10:40:38.225057] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:26:48.768 [2024-07-12 10:40:38.225206] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136543 ] 00:26:48.768 Using job config with 3 jobs 00:26:48.768 [2024-07-12 10:40:38.372788] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:48.768 [2024-07-12 10:40:38.545356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:48.768 cpumask for '\''job0'\'' is too big 00:26:48.768 cpumask for '\''job1'\'' is too big 00:26:48.768 cpumask for '\''job2'\'' is too big 00:26:48.768 Running I/O for 2 seconds... 00:26:48.768 00:26:48.768 Latency(us) 00:26:48.768 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:48.768 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:48.768 Malloc0 : 2.01 44636.72 43.59 0.00 0.00 5729.17 1407.53 8519.68 00:26:48.768 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:48.768 Malloc0 : 2.01 44649.14 43.60 0.00 0.00 5718.46 1370.30 7149.38 00:26:48.768 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:48.768 Malloc0 : 2.01 44620.08 43.57 0.00 0.00 5712.67 1377.75 7060.01 00:26:48.768 =================================================================================================================== 00:26:48.768 Total : 133905.94 130.77 0.00 0.00 5720.09 1370.30 8519.68' 00:26:48.768 10:40:42 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:26:48.768 10:40:42 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:26:48.768 10:40:42 -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:26:48.768 10:40:42 -- bdevperf/test_config.sh@35 -- # cleanup 00:26:48.768 10:40:42 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:26:48.768 10:40:42 -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:26:48.768 10:40:42 -- bdevperf/common.sh@8 -- # local job_section=global 00:26:48.768 10:40:42 -- bdevperf/common.sh@9 -- # local rw=rw 00:26:48.768 10:40:42 -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:26:48.768 10:40:42 -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:26:48.768 10:40:42 -- bdevperf/common.sh@13 -- # cat 00:26:48.768 10:40:42 -- bdevperf/common.sh@18 -- # job='[global]' 00:26:48.768 00:26:48.768 10:40:42 -- bdevperf/common.sh@19 -- # echo 00:26:48.769 
10:40:42 -- bdevperf/common.sh@20 -- # cat 00:26:48.769 10:40:42 -- bdevperf/test_config.sh@38 -- # create_job job0 00:26:48.769 10:40:42 -- bdevperf/common.sh@8 -- # local job_section=job0 00:26:48.769 10:40:42 -- bdevperf/common.sh@9 -- # local rw= 00:26:48.769 10:40:42 -- bdevperf/common.sh@10 -- # local filename= 00:26:48.769 10:40:42 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:26:48.769 00:26:48.769 10:40:42 -- bdevperf/common.sh@18 -- # job='[job0]' 00:26:48.769 10:40:42 -- bdevperf/common.sh@19 -- # echo 00:26:48.769 10:40:42 -- bdevperf/common.sh@20 -- # cat 00:26:48.769 10:40:42 -- bdevperf/test_config.sh@39 -- # create_job job1 00:26:48.769 10:40:42 -- bdevperf/common.sh@8 -- # local job_section=job1 00:26:48.769 10:40:42 -- bdevperf/common.sh@9 -- # local rw= 00:26:48.769 10:40:42 -- bdevperf/common.sh@10 -- # local filename= 00:26:48.769 00:26:48.769 10:40:42 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:26:48.769 10:40:42 -- bdevperf/common.sh@18 -- # job='[job1]' 00:26:48.769 10:40:42 -- bdevperf/common.sh@19 -- # echo 00:26:48.769 10:40:42 -- bdevperf/common.sh@20 -- # cat 00:26:48.769 10:40:42 -- bdevperf/test_config.sh@40 -- # create_job job2 00:26:48.769 10:40:42 -- bdevperf/common.sh@8 -- # local job_section=job2 00:26:48.769 10:40:42 -- bdevperf/common.sh@9 -- # local rw= 00:26:48.769 10:40:42 -- bdevperf/common.sh@10 -- # local filename= 00:26:48.769 00:26:48.769 10:40:42 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:26:48.769 10:40:42 -- bdevperf/common.sh@18 -- # job='[job2]' 00:26:48.769 10:40:42 -- bdevperf/common.sh@19 -- # echo 00:26:48.769 10:40:42 -- bdevperf/common.sh@20 -- # cat 00:26:48.769 10:40:42 -- bdevperf/test_config.sh@41 -- # create_job job3 00:26:48.769 10:40:42 -- bdevperf/common.sh@8 -- # local job_section=job3 00:26:48.769 10:40:42 -- bdevperf/common.sh@9 -- # local rw= 00:26:48.769 10:40:42 -- bdevperf/common.sh@10 -- # local filename= 00:26:48.769 10:40:42 -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:26:48.769 10:40:42 -- bdevperf/common.sh@18 -- # job='[job3]' 00:26:48.769 10:40:42 -- bdevperf/common.sh@19 -- # echo 00:26:48.769 00:26:48.769 10:40:42 -- bdevperf/common.sh@20 -- # cat 00:26:48.769 10:40:42 -- bdevperf/test_config.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:26:52.955 10:40:46 -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-07-12 10:40:42.155301] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:26:52.955 [2024-07-12 10:40:42.155520] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136601 ] 00:26:52.955 Using job config with 4 jobs 00:26:52.955 [2024-07-12 10:40:42.306730] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:52.955 [2024-07-12 10:40:42.495452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:52.955 cpumask for '\''job0'\'' is too big 00:26:52.955 cpumask for '\''job1'\'' is too big 00:26:52.955 cpumask for '\''job2'\'' is too big 00:26:52.955 cpumask for '\''job3'\'' is too big 00:26:52.955 Running I/O for 2 seconds... 
00:26:52.955 00:26:52.955 Latency(us) 00:26:52.955 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:52.955 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:52.955 Malloc0 : 2.02 16492.01 16.11 0.00 0.00 15512.94 2978.91 24069.59 00:26:52.955 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:52.955 Malloc1 : 2.03 16491.90 16.11 0.00 0.00 15500.17 3395.96 24069.59 00:26:52.955 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:52.955 Malloc0 : 2.03 16481.32 16.10 0.00 0.00 15472.05 2800.17 21209.83 00:26:52.955 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:52.955 Malloc1 : 2.04 16470.38 16.08 0.00 0.00 15469.60 3336.38 21209.83 00:26:52.955 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:52.955 Malloc0 : 2.04 16458.55 16.07 0.00 0.00 15441.36 2815.07 18350.08 00:26:52.955 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:52.955 Malloc1 : 2.04 16447.64 16.06 0.00 0.00 15438.98 3351.27 18350.08 00:26:52.955 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:52.955 Malloc0 : 2.04 16437.10 16.05 0.00 0.00 15411.21 2829.96 17277.67 00:26:52.955 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:52.955 Malloc1 : 2.04 16426.06 16.04 0.00 0.00 15409.30 3351.27 17158.52 00:26:52.955 =================================================================================================================== 00:26:52.955 Total : 131704.97 128.62 0.00 0.00 15456.90 2800.17 24069.59' 00:26:52.955 10:40:46 -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-07-12 10:40:42.155301] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:26:52.955 [2024-07-12 10:40:42.155520] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136601 ] 00:26:52.955 Using job config with 4 jobs 00:26:52.955 [2024-07-12 10:40:42.306730] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:52.955 [2024-07-12 10:40:42.495452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:52.955 cpumask for '\''job0'\'' is too big 00:26:52.955 cpumask for '\''job1'\'' is too big 00:26:52.955 cpumask for '\''job2'\'' is too big 00:26:52.955 cpumask for '\''job3'\'' is too big 00:26:52.955 Running I/O for 2 seconds... 
00:26:52.955 00:26:52.955 Latency(us) 00:26:52.955 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:52.955 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:52.955 Malloc0 : 2.02 16492.01 16.11 0.00 0.00 15512.94 2978.91 24069.59 00:26:52.955 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:52.955 Malloc1 : 2.03 16491.90 16.11 0.00 0.00 15500.17 3395.96 24069.59 00:26:52.955 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:52.955 Malloc0 : 2.03 16481.32 16.10 0.00 0.00 15472.05 2800.17 21209.83 00:26:52.955 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:52.955 Malloc1 : 2.04 16470.38 16.08 0.00 0.00 15469.60 3336.38 21209.83 00:26:52.955 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:52.955 Malloc0 : 2.04 16458.55 16.07 0.00 0.00 15441.36 2815.07 18350.08 00:26:52.955 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:52.955 Malloc1 : 2.04 16447.64 16.06 0.00 0.00 15438.98 3351.27 18350.08 00:26:52.955 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:52.955 Malloc0 : 2.04 16437.10 16.05 0.00 0.00 15411.21 2829.96 17277.67 00:26:52.955 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:52.955 Malloc1 : 2.04 16426.06 16.04 0.00 0.00 15409.30 3351.27 17158.52 00:26:52.955 =================================================================================================================== 00:26:52.955 Total : 131704.97 128.62 0.00 0.00 15456.90 2800.17 24069.59' 00:26:52.955 10:40:46 -- bdevperf/common.sh@32 -- # echo '[2024-07-12 10:40:42.155301] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:26:52.955 [2024-07-12 10:40:42.155520] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136601 ] 00:26:52.955 Using job config with 4 jobs 00:26:52.955 [2024-07-12 10:40:42.306730] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:52.955 [2024-07-12 10:40:42.495452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:52.955 cpumask for '\''job0'\'' is too big 00:26:52.955 cpumask for '\''job1'\'' is too big 00:26:52.955 cpumask for '\''job2'\'' is too big 00:26:52.955 cpumask for '\''job3'\'' is too big 00:26:52.955 Running I/O for 2 seconds... 
00:26:52.955 00:26:52.956 Latency(us) 00:26:52.956 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:52.956 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:52.956 Malloc0 : 2.02 16492.01 16.11 0.00 0.00 15512.94 2978.91 24069.59 00:26:52.956 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:52.956 Malloc1 : 2.03 16491.90 16.11 0.00 0.00 15500.17 3395.96 24069.59 00:26:52.956 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:52.956 Malloc0 : 2.03 16481.32 16.10 0.00 0.00 15472.05 2800.17 21209.83 00:26:52.956 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:52.956 Malloc1 : 2.04 16470.38 16.08 0.00 0.00 15469.60 3336.38 21209.83 00:26:52.956 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:52.956 Malloc0 : 2.04 16458.55 16.07 0.00 0.00 15441.36 2815.07 18350.08 00:26:52.956 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:52.956 Malloc1 : 2.04 16447.64 16.06 0.00 0.00 15438.98 3351.27 18350.08 00:26:52.956 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:52.956 Malloc0 : 2.04 16437.10 16.05 0.00 0.00 15411.21 2829.96 17277.67 00:26:52.956 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:52.956 Malloc1 : 2.04 16426.06 16.04 0.00 0.00 15409.30 3351.27 17158.52 00:26:52.956 =================================================================================================================== 00:26:52.956 Total : 131704.97 128.62 0.00 0.00 15456.90 2800.17 24069.59' 00:26:52.956 10:40:46 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:26:52.956 10:40:46 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:26:52.956 10:40:46 -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:26:52.956 10:40:46 -- bdevperf/test_config.sh@44 -- # cleanup 00:26:52.956 10:40:46 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:26:52.956 10:40:46 -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:26:52.956 00:26:52.956 real 0m16.163s 00:26:52.956 user 0m14.442s 00:26:52.956 sys 0m1.135s 00:26:52.956 10:40:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:52.956 10:40:46 -- common/autotest_common.sh@10 -- # set +x 00:26:52.956 ************************************ 00:26:52.956 END TEST bdevperf_config 00:26:52.956 ************************************ 00:26:52.956 10:40:46 -- spdk/autotest.sh@198 -- # uname -s 00:26:52.956 10:40:46 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:26:52.956 10:40:46 -- spdk/autotest.sh@199 -- # run_test reactor_set_interrupt /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:26:52.956 10:40:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:52.956 10:40:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:52.956 10:40:46 -- common/autotest_common.sh@10 -- # set +x 00:26:52.956 ************************************ 00:26:52.956 START TEST reactor_set_interrupt 00:26:52.956 ************************************ 00:26:52.956 10:40:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:26:52.956 * Looking for test storage... 
00:26:52.956 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:52.956 10:40:46 -- interrupt/reactor_set_interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:26:52.956 10:40:46 -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:26:52.956 10:40:46 -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:52.956 10:40:46 -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:26:52.956 10:40:46 -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 00:26:52.956 10:40:46 -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:52.956 10:40:46 -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:26:52.956 10:40:46 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:26:52.956 10:40:46 -- common/autotest_common.sh@34 -- # set -e 00:26:52.956 10:40:46 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:26:52.956 10:40:46 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:26:52.956 10:40:46 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:26:52.956 10:40:46 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:26:52.956 10:40:46 -- common/build_config.sh@1 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:26:52.956 10:40:46 -- common/build_config.sh@2 -- # CONFIG_FIO_PLUGIN=y 00:26:52.956 10:40:46 -- common/build_config.sh@3 -- # CONFIG_NVME_CUSE=y 00:26:52.956 10:40:46 -- common/build_config.sh@4 -- # CONFIG_RAID5F=y 00:26:52.956 10:40:46 -- common/build_config.sh@5 -- # CONFIG_LTO=n 00:26:52.956 10:40:46 -- common/build_config.sh@6 -- # CONFIG_SMA=n 00:26:52.956 10:40:46 -- common/build_config.sh@7 -- # CONFIG_ISAL=y 00:26:52.956 10:40:46 -- common/build_config.sh@8 -- # CONFIG_OPENSSL_PATH= 00:26:52.956 10:40:46 -- common/build_config.sh@9 -- # CONFIG_IDXD_KERNEL=n 00:26:52.956 10:40:46 -- common/build_config.sh@10 -- # CONFIG_URING_PATH= 00:26:52.956 10:40:46 -- common/build_config.sh@11 -- # CONFIG_DAOS=n 00:26:52.956 10:40:46 -- common/build_config.sh@12 -- # CONFIG_DPDK_LIB_DIR= 00:26:52.956 10:40:46 -- common/build_config.sh@13 -- # CONFIG_OCF=n 00:26:52.956 10:40:46 -- common/build_config.sh@14 -- # CONFIG_EXAMPLES=y 00:26:52.956 10:40:46 -- common/build_config.sh@15 -- # CONFIG_RDMA_PROV=verbs 00:26:52.956 10:40:46 -- common/build_config.sh@16 -- # CONFIG_ISCSI_INITIATOR=y 00:26:52.956 10:40:46 -- common/build_config.sh@17 -- # CONFIG_VTUNE=n 00:26:52.956 10:40:46 -- common/build_config.sh@18 -- # CONFIG_DPDK_INC_DIR= 00:26:52.956 10:40:46 -- common/build_config.sh@19 -- # CONFIG_CET=n 00:26:52.956 10:40:46 -- common/build_config.sh@20 -- # CONFIG_TESTS=y 00:26:52.956 10:40:46 -- common/build_config.sh@21 -- # CONFIG_APPS=y 00:26:52.956 10:40:46 -- common/build_config.sh@22 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:26:52.956 10:40:46 -- common/build_config.sh@23 -- # CONFIG_DAOS_DIR= 00:26:52.956 10:40:46 -- common/build_config.sh@24 -- # CONFIG_CRYPTO_MLX5=n 00:26:52.956 10:40:46 -- common/build_config.sh@25 -- # CONFIG_XNVME=n 00:26:52.956 10:40:46 -- common/build_config.sh@26 -- # CONFIG_UNIT_TESTS=y 00:26:52.956 10:40:46 -- common/build_config.sh@27 -- # CONFIG_FUSE=n 00:26:52.956 10:40:46 -- common/build_config.sh@28 -- # 
CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:26:52.956 10:40:46 -- common/build_config.sh@29 -- # CONFIG_OCF_PATH= 00:26:52.956 10:40:46 -- common/build_config.sh@30 -- # CONFIG_WPDK_DIR= 00:26:52.956 10:40:46 -- common/build_config.sh@31 -- # CONFIG_VFIO_USER=n 00:26:52.956 10:40:46 -- common/build_config.sh@32 -- # CONFIG_MAX_LCORES= 00:26:52.956 10:40:46 -- common/build_config.sh@33 -- # CONFIG_ARCH=native 00:26:52.956 10:40:46 -- common/build_config.sh@34 -- # CONFIG_TSAN=n 00:26:52.956 10:40:46 -- common/build_config.sh@35 -- # CONFIG_VIRTIO=y 00:26:52.956 10:40:46 -- common/build_config.sh@36 -- # CONFIG_IPSEC_MB=n 00:26:52.956 10:40:46 -- common/build_config.sh@37 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:26:52.956 10:40:46 -- common/build_config.sh@38 -- # CONFIG_ASAN=y 00:26:52.956 10:40:46 -- common/build_config.sh@39 -- # CONFIG_SHARED=n 00:26:52.956 10:40:46 -- common/build_config.sh@40 -- # CONFIG_VTUNE_DIR= 00:26:52.956 10:40:46 -- common/build_config.sh@41 -- # CONFIG_RDMA_SET_TOS=y 00:26:52.956 10:40:46 -- common/build_config.sh@42 -- # CONFIG_VBDEV_COMPRESS=n 00:26:52.956 10:40:46 -- common/build_config.sh@43 -- # CONFIG_VFIO_USER_DIR= 00:26:52.956 10:40:46 -- common/build_config.sh@44 -- # CONFIG_FUZZER_LIB= 00:26:52.956 10:40:46 -- common/build_config.sh@45 -- # CONFIG_HAVE_EXECINFO_H=y 00:26:52.956 10:40:46 -- common/build_config.sh@46 -- # CONFIG_USDT=n 00:26:52.956 10:40:46 -- common/build_config.sh@47 -- # CONFIG_URING_ZNS=n 00:26:52.956 10:40:46 -- common/build_config.sh@48 -- # CONFIG_FC_PATH= 00:26:52.956 10:40:46 -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:26:52.956 10:40:46 -- common/build_config.sh@50 -- # CONFIG_CUSTOMOCF=n 00:26:52.956 10:40:46 -- common/build_config.sh@51 -- # CONFIG_DPDK_PKG_CONFIG=n 00:26:52.956 10:40:46 -- common/build_config.sh@52 -- # CONFIG_WERROR=y 00:26:52.956 10:40:46 -- common/build_config.sh@53 -- # CONFIG_DEBUG=y 00:26:52.956 10:40:46 -- common/build_config.sh@54 -- # CONFIG_RDMA=y 00:26:52.956 10:40:46 -- common/build_config.sh@55 -- # CONFIG_HAVE_ARC4RANDOM=n 00:26:52.956 10:40:46 -- common/build_config.sh@56 -- # CONFIG_FUZZER=n 00:26:52.956 10:40:46 -- common/build_config.sh@57 -- # CONFIG_FC=n 00:26:52.956 10:40:46 -- common/build_config.sh@58 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:26:52.956 10:40:46 -- common/build_config.sh@59 -- # CONFIG_HAVE_LIBARCHIVE=n 00:26:52.956 10:40:46 -- common/build_config.sh@60 -- # CONFIG_DPDK_COMPRESSDEV=n 00:26:52.956 10:40:46 -- common/build_config.sh@61 -- # CONFIG_CROSS_PREFIX= 00:26:52.956 10:40:46 -- common/build_config.sh@62 -- # CONFIG_PREFIX=/usr/local 00:26:52.956 10:40:46 -- common/build_config.sh@63 -- # CONFIG_HAVE_LIBBSD=n 00:26:52.956 10:40:46 -- common/build_config.sh@64 -- # CONFIG_UBSAN=y 00:26:52.956 10:40:46 -- common/build_config.sh@65 -- # CONFIG_PGO_CAPTURE=n 00:26:52.956 10:40:46 -- common/build_config.sh@66 -- # CONFIG_UBLK=n 00:26:52.956 10:40:46 -- common/build_config.sh@67 -- # CONFIG_ISAL_CRYPTO=y 00:26:52.956 10:40:46 -- common/build_config.sh@68 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:26:52.956 10:40:46 -- common/build_config.sh@69 -- # CONFIG_CRYPTO=n 00:26:52.956 10:40:46 -- common/build_config.sh@70 -- # CONFIG_RBD=n 00:26:52.956 10:40:46 -- common/build_config.sh@71 -- # CONFIG_LIBDIR= 00:26:52.956 10:40:46 -- common/build_config.sh@72 -- # CONFIG_IPSEC_MB_DIR= 00:26:52.956 10:40:46 -- common/build_config.sh@73 -- # CONFIG_PGO_USE=n 00:26:52.956 10:40:46 -- common/build_config.sh@74 -- # 
CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:26:52.956 10:40:46 -- common/build_config.sh@75 -- # CONFIG_GOLANG=n 00:26:52.956 10:40:46 -- common/build_config.sh@76 -- # CONFIG_VHOST=y 00:26:52.956 10:40:46 -- common/build_config.sh@77 -- # CONFIG_IDXD=y 00:26:52.956 10:40:46 -- common/build_config.sh@78 -- # CONFIG_AVAHI=n 00:26:52.956 10:40:46 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:26:52.956 10:40:46 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:26:52.956 10:40:46 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:26:52.956 10:40:46 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:26:52.956 10:40:46 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:26:52.956 10:40:46 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:26:52.956 10:40:46 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:26:52.956 10:40:46 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:26:52.956 10:40:46 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:26:52.956 10:40:46 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:26:52.956 10:40:46 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:26:52.956 10:40:46 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:26:52.956 10:40:46 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:26:52.957 10:40:46 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:26:52.957 10:40:46 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:26:52.957 10:40:46 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:26:52.957 10:40:46 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:26:52.957 #define SPDK_CONFIG_H 00:26:52.957 #define SPDK_CONFIG_APPS 1 00:26:52.957 #define SPDK_CONFIG_ARCH native 00:26:52.957 #define SPDK_CONFIG_ASAN 1 00:26:52.957 #undef SPDK_CONFIG_AVAHI 00:26:52.957 #undef SPDK_CONFIG_CET 00:26:52.957 #define SPDK_CONFIG_COVERAGE 1 00:26:52.957 #define SPDK_CONFIG_CROSS_PREFIX 00:26:52.957 #undef SPDK_CONFIG_CRYPTO 00:26:52.957 #undef SPDK_CONFIG_CRYPTO_MLX5 00:26:52.957 #undef SPDK_CONFIG_CUSTOMOCF 00:26:52.957 #undef SPDK_CONFIG_DAOS 00:26:52.957 #define SPDK_CONFIG_DAOS_DIR 00:26:52.957 #define SPDK_CONFIG_DEBUG 1 00:26:52.957 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:26:52.957 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:26:52.957 #define SPDK_CONFIG_DPDK_INC_DIR 00:26:52.957 #define SPDK_CONFIG_DPDK_LIB_DIR 00:26:52.957 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:26:52.957 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:26:52.957 #define SPDK_CONFIG_EXAMPLES 1 00:26:52.957 #undef SPDK_CONFIG_FC 00:26:52.957 #define SPDK_CONFIG_FC_PATH 00:26:52.957 #define SPDK_CONFIG_FIO_PLUGIN 1 00:26:52.957 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:26:52.957 #undef SPDK_CONFIG_FUSE 00:26:52.957 #undef SPDK_CONFIG_FUZZER 00:26:52.957 #define SPDK_CONFIG_FUZZER_LIB 00:26:52.957 #undef SPDK_CONFIG_GOLANG 00:26:52.957 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:26:52.957 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:26:52.957 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:26:52.957 #undef SPDK_CONFIG_HAVE_LIBBSD 00:26:52.957 #define 
SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:26:52.957 #define SPDK_CONFIG_IDXD 1 00:26:52.957 #undef SPDK_CONFIG_IDXD_KERNEL 00:26:52.957 #undef SPDK_CONFIG_IPSEC_MB 00:26:52.957 #define SPDK_CONFIG_IPSEC_MB_DIR 00:26:52.957 #define SPDK_CONFIG_ISAL 1 00:26:52.957 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:26:52.957 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:26:52.957 #define SPDK_CONFIG_LIBDIR 00:26:52.957 #undef SPDK_CONFIG_LTO 00:26:52.957 #define SPDK_CONFIG_MAX_LCORES 00:26:52.957 #define SPDK_CONFIG_NVME_CUSE 1 00:26:52.957 #undef SPDK_CONFIG_OCF 00:26:52.957 #define SPDK_CONFIG_OCF_PATH 00:26:52.957 #define SPDK_CONFIG_OPENSSL_PATH 00:26:52.957 #undef SPDK_CONFIG_PGO_CAPTURE 00:26:52.957 #undef SPDK_CONFIG_PGO_USE 00:26:52.957 #define SPDK_CONFIG_PREFIX /usr/local 00:26:52.957 #define SPDK_CONFIG_RAID5F 1 00:26:52.957 #undef SPDK_CONFIG_RBD 00:26:52.957 #define SPDK_CONFIG_RDMA 1 00:26:52.957 #define SPDK_CONFIG_RDMA_PROV verbs 00:26:52.957 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:26:52.957 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:26:52.957 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:26:52.957 #undef SPDK_CONFIG_SHARED 00:26:52.957 #undef SPDK_CONFIG_SMA 00:26:52.957 #define SPDK_CONFIG_TESTS 1 00:26:52.957 #undef SPDK_CONFIG_TSAN 00:26:52.957 #undef SPDK_CONFIG_UBLK 00:26:52.957 #define SPDK_CONFIG_UBSAN 1 00:26:52.957 #define SPDK_CONFIG_UNIT_TESTS 1 00:26:52.957 #undef SPDK_CONFIG_URING 00:26:52.957 #define SPDK_CONFIG_URING_PATH 00:26:52.957 #undef SPDK_CONFIG_URING_ZNS 00:26:52.957 #undef SPDK_CONFIG_USDT 00:26:52.957 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:26:52.957 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:26:52.957 #undef SPDK_CONFIG_VFIO_USER 00:26:52.957 #define SPDK_CONFIG_VFIO_USER_DIR 00:26:52.957 #define SPDK_CONFIG_VHOST 1 00:26:52.957 #define SPDK_CONFIG_VIRTIO 1 00:26:52.957 #undef SPDK_CONFIG_VTUNE 00:26:52.957 #define SPDK_CONFIG_VTUNE_DIR 00:26:52.957 #define SPDK_CONFIG_WERROR 1 00:26:52.957 #define SPDK_CONFIG_WPDK_DIR 00:26:52.957 #undef SPDK_CONFIG_XNVME 00:26:52.957 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:26:52.957 10:40:46 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:26:52.957 10:40:46 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:52.957 10:40:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:52.957 10:40:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:52.957 10:40:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:52.957 10:40:46 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:52.957 10:40:46 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:52.957 10:40:46 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:52.957 10:40:46 -- paths/export.sh@5 -- # export PATH 00:26:52.957 10:40:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:52.957 10:40:46 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:26:52.957 10:40:46 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:26:52.957 10:40:46 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:26:52.957 10:40:46 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:26:52.957 10:40:46 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:26:52.957 10:40:46 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:26:52.957 10:40:46 -- pm/common@16 -- # TEST_TAG=N/A 00:26:52.957 10:40:46 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:26:52.957 10:40:46 -- common/autotest_common.sh@52 -- # : 1 00:26:52.957 10:40:46 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:26:52.957 10:40:46 -- common/autotest_common.sh@56 -- # : 0 00:26:52.957 10:40:46 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:26:52.957 10:40:46 -- common/autotest_common.sh@58 -- # : 0 00:26:52.957 10:40:46 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:26:52.957 10:40:46 -- common/autotest_common.sh@60 -- # : 1 00:26:52.957 10:40:46 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:26:52.957 10:40:46 -- common/autotest_common.sh@62 -- # : 1 00:26:52.957 10:40:46 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:26:52.957 10:40:46 -- common/autotest_common.sh@64 -- # : 00:26:52.957 10:40:46 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:26:52.957 10:40:46 -- common/autotest_common.sh@66 -- # : 0 00:26:52.957 10:40:46 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:26:52.957 10:40:46 -- common/autotest_common.sh@68 -- # : 0 00:26:52.957 10:40:46 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:26:52.957 10:40:46 -- common/autotest_common.sh@70 -- # : 0 00:26:52.957 10:40:46 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:26:52.957 10:40:46 -- common/autotest_common.sh@72 -- # : 0 00:26:52.957 10:40:46 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:26:52.957 10:40:46 -- common/autotest_common.sh@74 -- # : 1 00:26:52.957 10:40:46 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:26:52.957 10:40:46 -- common/autotest_common.sh@76 -- # : 0 00:26:52.957 10:40:46 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:26:52.957 10:40:46 -- common/autotest_common.sh@78 -- # : 0 00:26:52.957 10:40:46 -- 
common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:26:52.957 10:40:46 -- common/autotest_common.sh@80 -- # : 0 00:26:52.957 10:40:46 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:26:52.957 10:40:46 -- common/autotest_common.sh@82 -- # : 0 00:26:52.957 10:40:46 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:26:52.957 10:40:46 -- common/autotest_common.sh@84 -- # : 0 00:26:52.957 10:40:46 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:26:52.957 10:40:46 -- common/autotest_common.sh@86 -- # : 0 00:26:52.957 10:40:46 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:26:52.957 10:40:46 -- common/autotest_common.sh@88 -- # : 0 00:26:52.957 10:40:46 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:26:52.957 10:40:46 -- common/autotest_common.sh@90 -- # : 0 00:26:52.957 10:40:46 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:26:52.957 10:40:46 -- common/autotest_common.sh@92 -- # : 0 00:26:52.957 10:40:46 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:26:52.957 10:40:46 -- common/autotest_common.sh@94 -- # : 0 00:26:52.957 10:40:46 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:26:52.957 10:40:46 -- common/autotest_common.sh@96 -- # : rdma 00:26:52.957 10:40:46 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:26:52.957 10:40:46 -- common/autotest_common.sh@98 -- # : 0 00:26:52.957 10:40:46 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:26:52.957 10:40:46 -- common/autotest_common.sh@100 -- # : 0 00:26:52.957 10:40:46 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:26:52.957 10:40:46 -- common/autotest_common.sh@102 -- # : 1 00:26:52.957 10:40:46 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:26:52.957 10:40:46 -- common/autotest_common.sh@104 -- # : 0 00:26:52.957 10:40:46 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:26:52.957 10:40:46 -- common/autotest_common.sh@106 -- # : 0 00:26:52.957 10:40:46 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:26:52.957 10:40:46 -- common/autotest_common.sh@108 -- # : 0 00:26:52.957 10:40:46 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:26:52.957 10:40:46 -- common/autotest_common.sh@110 -- # : 0 00:26:52.957 10:40:46 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:26:52.957 10:40:46 -- common/autotest_common.sh@112 -- # : 0 00:26:52.957 10:40:46 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:26:52.957 10:40:46 -- common/autotest_common.sh@114 -- # : 1 00:26:52.957 10:40:46 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:26:52.957 10:40:46 -- common/autotest_common.sh@116 -- # : 1 00:26:52.957 10:40:46 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:26:52.957 10:40:46 -- common/autotest_common.sh@118 -- # : 00:26:52.957 10:40:46 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:26:52.957 10:40:46 -- common/autotest_common.sh@120 -- # : 0 00:26:52.957 10:40:46 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:26:52.958 10:40:46 -- common/autotest_common.sh@122 -- # : 0 00:26:52.958 10:40:46 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:26:52.958 10:40:46 -- common/autotest_common.sh@124 -- # : 0 00:26:52.958 10:40:46 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:26:52.958 10:40:46 -- common/autotest_common.sh@126 -- # : 0 00:26:52.958 
10:40:46 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:26:52.958 10:40:46 -- common/autotest_common.sh@128 -- # : 0 00:26:52.958 10:40:46 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:26:52.958 10:40:46 -- common/autotest_common.sh@130 -- # : 0 00:26:52.958 10:40:46 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:26:52.958 10:40:46 -- common/autotest_common.sh@132 -- # : 00:26:52.958 10:40:46 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:26:52.958 10:40:46 -- common/autotest_common.sh@134 -- # : true 00:26:52.958 10:40:46 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:26:52.958 10:40:46 -- common/autotest_common.sh@136 -- # : 1 00:26:52.958 10:40:46 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:26:52.958 10:40:46 -- common/autotest_common.sh@138 -- # : 0 00:26:52.958 10:40:46 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:26:52.958 10:40:46 -- common/autotest_common.sh@140 -- # : 0 00:26:52.958 10:40:46 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:26:52.958 10:40:46 -- common/autotest_common.sh@142 -- # : 0 00:26:52.958 10:40:46 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:26:52.958 10:40:46 -- common/autotest_common.sh@144 -- # : 0 00:26:52.958 10:40:46 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:26:52.958 10:40:46 -- common/autotest_common.sh@146 -- # : 0 00:26:52.958 10:40:46 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:26:52.958 10:40:46 -- common/autotest_common.sh@148 -- # : 00:26:52.958 10:40:46 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:26:52.958 10:40:46 -- common/autotest_common.sh@150 -- # : 0 00:26:52.958 10:40:46 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:26:52.958 10:40:46 -- common/autotest_common.sh@152 -- # : 0 00:26:52.958 10:40:46 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:26:52.958 10:40:46 -- common/autotest_common.sh@154 -- # : 0 00:26:52.958 10:40:46 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:26:52.958 10:40:46 -- common/autotest_common.sh@156 -- # : 0 00:26:52.958 10:40:46 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:26:52.958 10:40:46 -- common/autotest_common.sh@158 -- # : 0 00:26:52.958 10:40:46 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:26:52.958 10:40:46 -- common/autotest_common.sh@160 -- # : 0 00:26:52.958 10:40:46 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:26:52.958 10:40:46 -- common/autotest_common.sh@163 -- # : 00:26:52.958 10:40:46 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:26:52.958 10:40:46 -- common/autotest_common.sh@165 -- # : 0 00:26:52.958 10:40:46 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:26:52.958 10:40:46 -- common/autotest_common.sh@167 -- # : 0 00:26:52.958 10:40:46 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:26:52.958 10:40:46 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:26:52.958 10:40:46 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:26:52.958 10:40:46 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:26:52.958 10:40:46 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:26:52.958 10:40:46 -- 
common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:26:52.958 10:40:46 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:26:52.958 10:40:46 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:26:52.958 10:40:46 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:26:52.958 10:40:46 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:26:52.958 10:40:46 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:26:52.958 10:40:46 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:26:52.958 10:40:46 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:26:52.958 10:40:46 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:26:52.958 10:40:46 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:26:52.958 10:40:46 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:26:52.958 10:40:46 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:26:52.958 10:40:46 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:26:52.958 10:40:46 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:26:52.958 10:40:46 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:26:52.958 10:40:46 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:26:52.958 10:40:46 -- common/autotest_common.sh@196 -- # cat 00:26:52.958 10:40:46 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:26:52.958 10:40:46 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:26:52.958 10:40:46 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:26:52.958 10:40:46 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:26:52.958 
10:40:46 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:26:52.958 10:40:46 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:26:52.958 10:40:46 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:26:52.958 10:40:46 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:26:52.958 10:40:46 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:26:52.958 10:40:46 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:26:52.958 10:40:46 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:26:52.958 10:40:46 -- common/autotest_common.sh@239 -- # export QEMU_BIN= 00:26:52.958 10:40:46 -- common/autotest_common.sh@239 -- # QEMU_BIN= 00:26:52.958 10:40:46 -- common/autotest_common.sh@240 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:26:52.958 10:40:46 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:26:52.958 10:40:46 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:26:52.958 10:40:46 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:26:52.958 10:40:46 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:26:52.958 10:40:46 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:26:52.958 10:40:46 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:26:52.958 10:40:46 -- common/autotest_common.sh@249 -- # export valgrind= 00:26:52.958 10:40:46 -- common/autotest_common.sh@249 -- # valgrind= 00:26:52.958 10:40:46 -- common/autotest_common.sh@255 -- # uname -s 00:26:52.958 10:40:46 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:26:52.958 10:40:46 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:26:52.958 10:40:46 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:26:52.958 10:40:46 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:26:52.958 10:40:46 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:26:52.958 10:40:46 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:26:52.958 10:40:46 -- common/autotest_common.sh@265 -- # MAKE=make 00:26:52.958 10:40:46 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j10 00:26:52.958 10:40:46 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:26:52.958 10:40:46 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:26:52.958 10:40:46 -- common/autotest_common.sh@284 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:26:52.958 10:40:46 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:26:52.958 10:40:46 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:26:52.958 10:40:46 -- common/autotest_common.sh@309 -- # [[ -z 136692 ]] 00:26:52.958 10:40:46 -- common/autotest_common.sh@309 -- # kill -0 136692 00:26:52.958 10:40:46 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:26:52.958 10:40:46 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:26:52.958 10:40:46 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:26:52.958 10:40:46 -- common/autotest_common.sh@322 -- # local mount target_dir 00:26:52.958 10:40:46 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:26:52.958 10:40:46 -- common/autotest_common.sh@325 -- # local source fs size 
avail mount use 00:26:52.958 10:40:46 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:26:52.958 10:40:46 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:26:52.958 10:40:46 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.VUbOrz 00:26:52.958 10:40:46 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:26:52.958 10:40:46 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:26:52.958 10:40:46 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:26:52.958 10:40:46 -- common/autotest_common.sh@346 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.VUbOrz/tests/interrupt /tmp/spdk.VUbOrz 00:26:52.958 10:40:46 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:26:52.958 10:40:46 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:52.958 10:40:46 -- common/autotest_common.sh@318 -- # df -T 00:26:52.958 10:40:46 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:26:52.958 10:40:46 -- common/autotest_common.sh@352 -- # mounts["$mount"]=udev 00:26:52.958 10:40:46 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 00:26:52.958 10:40:46 -- common/autotest_common.sh@353 -- # avails["$mount"]=6224457728 00:26:52.958 10:40:46 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6224457728 00:26:52.958 10:40:46 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:26:52.958 10:40:46 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:52.958 10:40:46 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:26:52.958 10:40:46 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:26:52.958 10:40:46 -- common/autotest_common.sh@353 -- # avails["$mount"]=1249763328 00:26:52.958 10:40:46 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1254514688 00:26:52.958 10:40:46 -- common/autotest_common.sh@354 -- # uses["$mount"]=4751360 00:26:52.958 10:40:46 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:52.958 10:40:46 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda1 00:26:52.959 10:40:46 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext4 00:26:52.959 10:40:46 -- common/autotest_common.sh@353 -- # avails["$mount"]=10616156160 00:26:52.959 10:40:46 -- common/autotest_common.sh@353 -- # sizes["$mount"]=20616794112 00:26:52.959 10:40:46 -- common/autotest_common.sh@354 -- # uses["$mount"]=9983860736 00:26:52.959 10:40:46 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:52.959 10:40:46 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:26:52.959 10:40:46 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:26:52.959 10:40:46 -- common/autotest_common.sh@353 -- # avails["$mount"]=6269964288 00:26:52.959 10:40:46 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6272557056 00:26:52.959 10:40:46 -- common/autotest_common.sh@354 -- # uses["$mount"]=2592768 00:26:52.959 10:40:46 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:52.959 10:40:46 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:26:52.959 10:40:46 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:26:52.959 10:40:46 -- common/autotest_common.sh@353 -- # avails["$mount"]=5242880 00:26:52.959 10:40:46 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5242880 00:26:52.959 10:40:46 -- 
common/autotest_common.sh@354 -- # uses["$mount"]=0 00:26:52.959 10:40:46 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:52.959 10:40:46 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:26:52.959 10:40:46 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:26:52.959 10:40:46 -- common/autotest_common.sh@353 -- # avails["$mount"]=6272557056 00:26:52.959 10:40:46 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6272557056 00:26:52.959 10:40:46 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:26:52.959 10:40:46 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:52.959 10:40:46 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/loop0 00:26:52.959 10:40:46 -- common/autotest_common.sh@352 -- # fss["$mount"]=squashfs 00:26:52.959 10:40:46 -- common/autotest_common.sh@353 -- # avails["$mount"]=0 00:26:52.959 10:40:46 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:26:52.959 10:40:46 -- common/autotest_common.sh@354 -- # uses["$mount"]=67108864 00:26:52.959 10:40:46 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:52.959 10:40:46 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda15 00:26:52.959 10:40:46 -- common/autotest_common.sh@352 -- # fss["$mount"]=vfat 00:26:52.959 10:40:46 -- common/autotest_common.sh@353 -- # avails["$mount"]=103089152 00:26:52.959 10:40:46 -- common/autotest_common.sh@353 -- # sizes["$mount"]=109422592 00:26:52.959 10:40:46 -- common/autotest_common.sh@354 -- # uses["$mount"]=6334464 00:26:52.959 10:40:46 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:52.959 10:40:46 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/loop2 00:26:52.959 10:40:46 -- common/autotest_common.sh@352 -- # fss["$mount"]=squashfs 00:26:52.959 10:40:46 -- common/autotest_common.sh@353 -- # avails["$mount"]=0 00:26:52.959 10:40:46 -- common/autotest_common.sh@353 -- # sizes["$mount"]=41025536 00:26:52.959 10:40:46 -- common/autotest_common.sh@354 -- # uses["$mount"]=41025536 00:26:52.959 10:40:46 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:52.959 10:40:46 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/loop1 00:26:52.959 10:40:46 -- common/autotest_common.sh@352 -- # fss["$mount"]=squashfs 00:26:52.959 10:40:46 -- common/autotest_common.sh@353 -- # avails["$mount"]=0 00:26:52.959 10:40:46 -- common/autotest_common.sh@353 -- # sizes["$mount"]=96337920 00:26:52.959 10:40:46 -- common/autotest_common.sh@354 -- # uses["$mount"]=96337920 00:26:52.959 10:40:46 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:52.959 10:40:46 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:26:52.959 10:40:46 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:26:52.959 10:40:46 -- common/autotest_common.sh@353 -- # avails["$mount"]=1254510592 00:26:52.959 10:40:46 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1254510592 00:26:52.959 10:40:46 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:26:52.959 10:40:46 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:52.959 10:40:46 -- common/autotest_common.sh@352 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu20-vg-autotest_2/ubuntu2004-libvirt/output 00:26:52.959 10:40:46 -- common/autotest_common.sh@352 -- # fss["$mount"]=fuse.sshfs 00:26:52.959 10:40:46 -- 
common/autotest_common.sh@353 -- # avails["$mount"]=98674257920 00:26:52.959 10:40:46 -- common/autotest_common.sh@353 -- # sizes["$mount"]=105088212992 00:26:52.959 10:40:46 -- common/autotest_common.sh@354 -- # uses["$mount"]=1028521984 00:26:52.959 10:40:46 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:52.959 10:40:46 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/loop3 00:26:52.959 10:40:46 -- common/autotest_common.sh@352 -- # fss["$mount"]=squashfs 00:26:52.959 10:40:46 -- common/autotest_common.sh@353 -- # avails["$mount"]=0 00:26:52.959 10:40:46 -- common/autotest_common.sh@353 -- # sizes["$mount"]=40763392 00:26:52.959 10:40:46 -- common/autotest_common.sh@354 -- # uses["$mount"]=40763392 00:26:52.959 10:40:46 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:52.959 10:40:46 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/loop4 00:26:52.959 10:40:46 -- common/autotest_common.sh@352 -- # fss["$mount"]=squashfs 00:26:52.959 10:40:46 -- common/autotest_common.sh@353 -- # avails["$mount"]=0 00:26:52.959 10:40:46 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:26:52.959 10:40:46 -- common/autotest_common.sh@354 -- # uses["$mount"]=67108864 00:26:52.959 10:40:46 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:52.959 10:40:46 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:26:52.959 * Looking for test storage... 00:26:52.959 10:40:46 -- common/autotest_common.sh@359 -- # local target_space new_size 00:26:52.959 10:40:46 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:26:52.959 10:40:46 -- common/autotest_common.sh@363 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:52.959 10:40:46 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:26:52.959 10:40:46 -- common/autotest_common.sh@363 -- # mount=/ 00:26:52.959 10:40:46 -- common/autotest_common.sh@365 -- # target_space=10616156160 00:26:52.959 10:40:46 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:26:52.959 10:40:46 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:26:52.959 10:40:46 -- common/autotest_common.sh@371 -- # [[ ext4 == tmpfs ]] 00:26:52.959 10:40:46 -- common/autotest_common.sh@371 -- # [[ ext4 == ramfs ]] 00:26:52.959 10:40:46 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:26:52.959 10:40:46 -- common/autotest_common.sh@372 -- # new_size=12198453248 00:26:52.959 10:40:46 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:26:52.959 10:40:46 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:26:52.959 10:40:46 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:26:52.959 10:40:46 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:52.959 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:52.959 10:40:46 -- common/autotest_common.sh@380 -- # return 0 00:26:52.959 10:40:46 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:26:52.959 10:40:46 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:26:52.959 10:40:46 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:26:52.959 10:40:46 -- 
common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:26:52.959 10:40:46 -- common/autotest_common.sh@1672 -- # true 00:26:52.959 10:40:46 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:26:52.959 10:40:46 -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:26:52.959 10:40:46 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:26:52.959 10:40:46 -- common/autotest_common.sh@27 -- # exec 00:26:52.959 10:40:46 -- common/autotest_common.sh@29 -- # exec 00:26:52.959 10:40:46 -- common/autotest_common.sh@31 -- # xtrace_restore 00:26:52.959 10:40:46 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:26:52.959 10:40:46 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:26:52.959 10:40:46 -- common/autotest_common.sh@18 -- # set -x 00:26:52.959 10:40:46 -- interrupt/interrupt_common.sh@9 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:52.959 10:40:46 -- interrupt/interrupt_common.sh@11 -- # r0_mask=0x1 00:26:52.959 10:40:46 -- interrupt/interrupt_common.sh@12 -- # r1_mask=0x2 00:26:52.959 10:40:46 -- interrupt/interrupt_common.sh@13 -- # r2_mask=0x4 00:26:52.959 10:40:46 -- interrupt/interrupt_common.sh@15 -- # cpu_server_mask=0x07 00:26:52.960 10:40:46 -- interrupt/interrupt_common.sh@16 -- # rpc_server_addr=/var/tmp/spdk.sock 00:26:52.960 10:40:46 -- interrupt/reactor_set_interrupt.sh@11 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:26:52.960 10:40:46 -- interrupt/reactor_set_interrupt.sh@11 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:26:52.960 10:40:46 -- interrupt/reactor_set_interrupt.sh@86 -- # start_intr_tgt 00:26:52.960 10:40:46 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:52.960 10:40:46 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:26:52.960 10:40:46 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=136732 00:26:52.960 10:40:46 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:52.960 10:40:46 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:26:52.960 10:40:46 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 136732 /var/tmp/spdk.sock 00:26:52.960 10:40:46 -- common/autotest_common.sh@819 -- # '[' -z 136732 ']' 00:26:52.960 10:40:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:52.960 10:40:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:52.960 10:40:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:52.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:52.960 10:40:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:52.960 10:40:46 -- common/autotest_common.sh@10 -- # set +x 00:26:52.960 [2024-07-12 10:40:46.386493] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
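
For reference, the storage-selection pass traced above (autotest_common.sh@318-380) amounts to the following. This is a condensed sketch, not the script verbatim; the tmpfs/ramfs shortcuts are omitted and the >95%-full handling is shown only as a warning:

    # Sketch of the test-storage search traced above (df -T reports 1K blocks,
    # hence the *1024 when caching per-mount byte counts).
    requested_size=2214592512                    # bytes the test wants free
    storage_fallback=$(mktemp -udt spdk.XXXXXX)  # -u: generate a name, create nothing
    storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
    mkdir -p "${storage_candidates[@]}"

    declare -A mounts fss avails sizes uses
    while read -r source fs size use avail _ mount; do
        mounts["$mount"]=$source
        fss["$mount"]=$fs
        avails["$mount"]=$((avail * 1024))
        sizes["$mount"]=$((size * 1024))
        uses["$mount"]=$((use * 1024))
    done < <(df -T | grep -v Filesystem)

    # Take the first candidate whose backing filesystem has room for the test.
    for target_dir in "${storage_candidates[@]}"; do
        mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
        target_space=${avails["$mount"]}
        (( target_space >= requested_size )) || continue
        new_size=$(( requested_size + uses["$mount"] ))   # 12198453248 in this run
        (( new_size * 100 / sizes["$mount"] > 95 )) && \
            echo "* Note: chosen filesystem would be >95% full" >&2
        export SPDK_TEST_STORAGE=$target_dir
        printf '* Found test storage at %s\n' "$target_dir"
        break
    done
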
00:26:52.960 [2024-07-12 10:40:46.386865] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136732 ] 00:26:52.960 [2024-07-12 10:40:46.563307] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:52.960 [2024-07-12 10:40:46.726506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:52.960 [2024-07-12 10:40:46.726675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:52.960 [2024-07-12 10:40:46.726671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:53.218 [2024-07-12 10:40:46.975518] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:53.476 10:40:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:53.476 10:40:47 -- common/autotest_common.sh@852 -- # return 0 00:26:53.476 10:40:47 -- interrupt/reactor_set_interrupt.sh@87 -- # setup_bdev_mem 00:26:53.476 10:40:47 -- interrupt/interrupt_common.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:54.043 Malloc0 00:26:54.043 Malloc1 00:26:54.043 Malloc2 00:26:54.043 10:40:47 -- interrupt/reactor_set_interrupt.sh@88 -- # setup_bdev_aio 00:26:54.043 10:40:47 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:26:54.043 10:40:47 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:26:54.043 10:40:47 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:26:54.043 5000+0 records in 00:26:54.043 5000+0 records out 00:26:54.043 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0252372 s, 406 MB/s 00:26:54.043 10:40:47 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:26:54.043 AIO0 00:26:54.301 10:40:47 -- interrupt/reactor_set_interrupt.sh@90 -- # reactor_set_mode_without_threads 136732 00:26:54.301 10:40:47 -- interrupt/reactor_set_interrupt.sh@76 -- # reactor_set_intr_mode 136732 without_thd 00:26:54.301 10:40:47 -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=136732 00:26:54.301 10:40:47 -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=without_thd 00:26:54.301 10:40:47 -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:26:54.301 10:40:47 -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:26:54.301 10:40:47 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x1 00:26:54.301 10:40:47 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:26:54.301 10:40:47 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=1 00:26:54.301 10:40:47 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:26:54.301 10:40:47 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:26:54.301 10:40:47 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:26:54.559 10:40:48 -- interrupt/interrupt_common.sh@85 -- # echo 1 00:26:54.559 10:40:48 -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:26:54.559 10:40:48 -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 
0x4 00:26:54.559 10:40:48 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x4 00:26:54.559 10:40:48 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:26:54.559 10:40:48 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=4 00:26:54.559 10:40:48 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:26:54.559 10:40:48 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:26:54.559 10:40:48 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:26:54.559 10:40:48 -- interrupt/interrupt_common.sh@85 -- # echo '' 00:26:54.559 10:40:48 -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:26:54.559 10:40:48 -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:26:54.559 spdk_thread ids are 1 on reactor0. 00:26:54.559 10:40:48 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:26:54.559 10:40:48 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 136732 0 00:26:54.559 10:40:48 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 136732 0 idle 00:26:54.559 10:40:48 -- interrupt/interrupt_common.sh@33 -- # local pid=136732 00:26:54.559 10:40:48 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:26:54.559 10:40:48 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:54.559 10:40:48 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:54.559 10:40:48 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:54.559 10:40:48 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:54.559 10:40:48 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:54.559 10:40:48 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:54.559 10:40:48 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 136732 -w 256 00:26:54.559 10:40:48 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:26:54.820 10:40:48 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 136732 root 20 0 20.1t 145552 28588 S 13.3 1.2 0:00.66 reactor_0' 00:26:54.821 10:40:48 -- interrupt/interrupt_common.sh@48 -- # echo 136732 root 20 0 20.1t 145552 28588 S 13.3 1.2 0:00.66 reactor_0 00:26:54.821 10:40:48 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:54.821 10:40:48 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:54.821 10:40:48 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=13.3 00:26:54.821 10:40:48 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=13 00:26:54.821 10:40:48 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:54.821 10:40:48 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:54.821 10:40:48 -- interrupt/interrupt_common.sh@53 -- # [[ 13 -gt 30 ]] 00:26:54.821 10:40:48 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:54.821 10:40:48 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:26:54.821 10:40:48 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 136732 1 00:26:54.821 10:40:48 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 136732 1 idle 00:26:54.821 10:40:48 -- interrupt/interrupt_common.sh@33 -- # local pid=136732 00:26:54.821 10:40:48 -- interrupt/interrupt_common.sh@34 -- # local idx=1 00:26:54.821 10:40:48 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:54.821 10:40:48 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 
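
The thread-id lookup just traced pairs the thread_get_stats RPC with a jq filter on the cpumask. Roughly, as a sketch of the interrupt_common.sh helper (the hex-to-decimal step matches single-core masks, which is all this test uses):

    # Sketch of reactor_get_thread_ids as traced above.
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    reactor_get_thread_ids() {
        local reactor_cpumask=$1
        reactor_cpumask=$((reactor_cpumask))   # 0x1 -> 1, 0x4 -> 4, as in the trace
        local jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
        "$rpc_py" thread_get_stats | jq --arg reactor_cpumask "$reactor_cpumask" "$jq_str"
    }
    # Usage as above: thd0_ids holds app_thread's id (1 in this run); thd2_ids
    # comes back empty because no spdk thread lives on reactor 2 yet.
    thd0_ids=($(reactor_get_thread_ids 0x1))
    thd2_ids=($(reactor_get_thread_ids 0x4))
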
00:26:54.821 10:40:48 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:54.821 10:40:48 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:54.821 10:40:48 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:54.821 10:40:48 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:54.821 10:40:48 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 136732 -w 256 00:26:54.821 10:40:48 -- interrupt/interrupt_common.sh@47 -- # grep reactor_1 00:26:55.079 10:40:48 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 136735 root 20 0 20.1t 145552 28588 S 0.0 1.2 0:00.00 reactor_1' 00:26:55.079 10:40:48 -- interrupt/interrupt_common.sh@48 -- # echo 136735 root 20 0 20.1t 145552 28588 S 0.0 1.2 0:00.00 reactor_1 00:26:55.079 10:40:48 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:55.079 10:40:48 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:55.079 10:40:48 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:26:55.079 10:40:48 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:26:55.079 10:40:48 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:55.079 10:40:48 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:55.079 10:40:48 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:26:55.080 10:40:48 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:55.080 10:40:48 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:26:55.080 10:40:48 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 136732 2 00:26:55.080 10:40:48 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 136732 2 idle 00:26:55.080 10:40:48 -- interrupt/interrupt_common.sh@33 -- # local pid=136732 00:26:55.080 10:40:48 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:26:55.080 10:40:48 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:55.080 10:40:48 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:55.080 10:40:48 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:55.080 10:40:48 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:55.080 10:40:48 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:55.080 10:40:48 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:55.080 10:40:48 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 136732 -w 256 00:26:55.080 10:40:48 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:26:55.080 10:40:48 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 136736 root 20 0 20.1t 145552 28588 S 0.0 1.2 0:00.00 reactor_2' 00:26:55.080 10:40:48 -- interrupt/interrupt_common.sh@48 -- # echo 136736 root 20 0 20.1t 145552 28588 S 0.0 1.2 0:00.00 reactor_2 00:26:55.080 10:40:48 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:55.080 10:40:48 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:55.080 10:40:48 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:26:55.080 10:40:48 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:26:55.080 10:40:48 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:55.080 10:40:48 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:55.080 10:40:48 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:26:55.080 10:40:48 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:55.080 10:40:48 -- interrupt/reactor_set_interrupt.sh@33 -- # '[' without_thdx '!=' x ']' 00:26:55.080 10:40:48 -- interrupt/reactor_set_interrupt.sh@35 -- # for i in 
"${thd0_ids[@]}" 00:26:55.080 10:40:48 -- interrupt/reactor_set_interrupt.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x2 00:26:55.338 [2024-07-12 10:40:49.196109] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:55.338 10:40:49 -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:26:55.596 [2024-07-12 10:40:49.463851] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:26:55.596 [2024-07-12 10:40:49.464588] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:26:55.596 10:40:49 -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:26:55.854 [2024-07-12 10:40:49.663737] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:26:55.854 [2024-07-12 10:40:49.664273] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:26:55.854 10:40:49 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:26:55.854 10:40:49 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 136732 0 00:26:55.854 10:40:49 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 136732 0 busy 00:26:55.854 10:40:49 -- interrupt/interrupt_common.sh@33 -- # local pid=136732 00:26:55.854 10:40:49 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:26:55.854 10:40:49 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:26:55.854 10:40:49 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:26:55.855 10:40:49 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:55.855 10:40:49 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:55.855 10:40:49 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:55.855 10:40:49 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 136732 -w 256 00:26:55.855 10:40:49 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:26:56.112 10:40:49 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 136732 root 20 0 20.1t 145668 28588 R 99.9 1.2 0:01.03 reactor_0' 00:26:56.112 10:40:49 -- interrupt/interrupt_common.sh@48 -- # echo 136732 root 20 0 20.1t 145668 28588 R 99.9 1.2 0:01.03 reactor_0 00:26:56.112 10:40:49 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:56.112 10:40:49 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:56.112 10:40:49 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:26:56.113 10:40:49 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:26:56.113 10:40:49 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:26:56.113 10:40:49 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:26:56.113 10:40:49 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:26:56.113 10:40:49 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:56.113 10:40:49 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:26:56.113 10:40:49 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 136732 2 00:26:56.113 10:40:49 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 136732 2 busy 00:26:56.113 10:40:49 -- interrupt/interrupt_common.sh@33 -- # local pid=136732 00:26:56.113 10:40:49 -- interrupt/interrupt_common.sh@34 -- # local 
idx=2 00:26:56.113 10:40:49 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:26:56.113 10:40:49 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:26:56.113 10:40:49 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:56.113 10:40:49 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:56.113 10:40:49 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:56.113 10:40:49 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 136732 -w 256 00:26:56.113 10:40:49 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:26:56.113 10:40:50 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 136736 root 20 0 20.1t 145668 28588 R 99.9 1.2 0:00.33 reactor_2' 00:26:56.113 10:40:50 -- interrupt/interrupt_common.sh@48 -- # echo 136736 root 20 0 20.1t 145668 28588 R 99.9 1.2 0:00.33 reactor_2 00:26:56.113 10:40:50 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:56.113 10:40:50 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:56.113 10:40:50 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:26:56.113 10:40:50 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:26:56.113 10:40:50 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:26:56.113 10:40:50 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:26:56.113 10:40:50 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:26:56.113 10:40:50 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:56.113 10:40:50 -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:26:56.370 [2024-07-12 10:40:50.251749] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 
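
Every busy/idle probe above runs the same check: take the reactor thread's %CPU from one batch iteration of top and compare it against fixed thresholds. Condensed into a sketch (the j=10 retry loop is trimmed):

    # Sketch of reactor_is_busy_or_idle as traced above.
    reactor_is_busy_or_idle() {
        local pid=$1 idx=$2 state=$3
        hash top || return 1                        # the probe needs top available
        local top_reactor cpu_rate
        # -b batch, -H per-thread, -n 1 single iteration, -w 256 wide output.
        top_reactor=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx")
        cpu_rate=$(echo "$top_reactor" | sed -e 's/^\s*//g' | awk '{print $9}')
        cpu_rate=${cpu_rate%.*}                     # 13.3 -> 13, 99.9 -> 99
        if [[ $state = busy ]]; then
            [[ $cpu_rate -lt 70 ]] && return 1      # poll-mode reactors must spin >=70%
        elif [[ $state = idle ]]; then
            [[ $cpu_rate -gt 30 ]] && return 1      # interrupt-mode reactors must sit <=30%
        fi
        return 0
    }
    reactor_is_busy() { reactor_is_busy_or_idle "$1" "$2" busy; }
    reactor_is_idle() { reactor_is_busy_or_idle "$1" "$2" idle; }
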
00:26:56.370 [2024-07-12 10:40:50.252329] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:26:56.370 10:40:50 -- interrupt/reactor_set_interrupt.sh@52 -- # '[' without_thdx '!=' x ']' 00:26:56.370 10:40:50 -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 136732 2 00:26:56.370 10:40:50 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 136732 2 idle 00:26:56.370 10:40:50 -- interrupt/interrupt_common.sh@33 -- # local pid=136732 00:26:56.370 10:40:50 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:26:56.370 10:40:50 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:56.370 10:40:50 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:56.370 10:40:50 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:56.370 10:40:50 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:56.370 10:40:50 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:56.370 10:40:50 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:56.370 10:40:50 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 136732 -w 256 00:26:56.370 10:40:50 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:26:56.627 10:40:50 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 136736 root 20 0 20.1t 145732 28588 S 0.0 1.2 0:00.58 reactor_2' 00:26:56.627 10:40:50 -- interrupt/interrupt_common.sh@48 -- # echo 136736 root 20 0 20.1t 145732 28588 S 0.0 1.2 0:00.58 reactor_2 00:26:56.627 10:40:50 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:56.627 10:40:50 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:56.627 10:40:50 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:26:56.627 10:40:50 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:26:56.627 10:40:50 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:56.627 10:40:50 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:56.627 10:40:50 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:26:56.627 10:40:50 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:56.627 10:40:50 -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:26:56.885 [2024-07-12 10:40:50.611756] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:26:56.885 [2024-07-12 10:40:50.612358] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:26:56.885 10:40:50 -- interrupt/reactor_set_interrupt.sh@63 -- # '[' without_thdx '!=' x ']' 00:26:56.885 10:40:50 -- interrupt/reactor_set_interrupt.sh@65 -- # for i in "${thd0_ids[@]}" 00:26:56.885 10:40:50 -- interrupt/reactor_set_interrupt.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x1 00:26:57.144 [2024-07-12 10:40:50.860069] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
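
Stripped of the trace, the mode flips driving this pass are four RPCs against the target's socket (the interrupt_plugin is picked up through the PYTHONPATH export earlier in the run):

    # The exact commands exercised above, runnable against a live interrupt_tgt.
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # -d drops a reactor out of interrupt mode; its core then spins near 100%.
    "$rpc_py" --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d
    "$rpc_py" --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d
    # Without -d the reactor switches back and the core idles near 0%.
    "$rpc_py" --plugin interrupt_plugin reactor_set_interrupt_mode 2
    "$rpc_py" --plugin interrupt_plugin reactor_set_interrupt_mode 0
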
00:26:57.144 10:40:50 -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 136732 0 00:26:57.144 10:40:50 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 136732 0 idle 00:26:57.144 10:40:50 -- interrupt/interrupt_common.sh@33 -- # local pid=136732 00:26:57.144 10:40:50 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:26:57.144 10:40:50 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:57.144 10:40:50 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:57.144 10:40:50 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:57.144 10:40:50 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:57.144 10:40:50 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:57.144 10:40:50 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:57.144 10:40:50 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 136732 -w 256 00:26:57.144 10:40:50 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:26:57.144 10:40:51 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 136732 root 20 0 20.1t 145824 28588 S 0.0 1.2 0:01.81 reactor_0' 00:26:57.144 10:40:51 -- interrupt/interrupt_common.sh@48 -- # echo 136732 root 20 0 20.1t 145824 28588 S 0.0 1.2 0:01.81 reactor_0 00:26:57.144 10:40:51 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:57.144 10:40:51 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:57.144 10:40:51 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:26:57.144 10:40:51 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:26:57.144 10:40:51 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:57.144 10:40:51 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:57.144 10:40:51 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:26:57.144 10:40:51 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:57.144 10:40:51 -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:26:57.144 10:40:51 -- interrupt/reactor_set_interrupt.sh@77 -- # return 0 00:26:57.144 10:40:51 -- interrupt/reactor_set_interrupt.sh@92 -- # trap - SIGINT SIGTERM EXIT 00:26:57.144 10:40:51 -- interrupt/reactor_set_interrupt.sh@93 -- # killprocess 136732 00:26:57.144 10:40:51 -- common/autotest_common.sh@926 -- # '[' -z 136732 ']' 00:26:57.144 10:40:51 -- common/autotest_common.sh@930 -- # kill -0 136732 00:26:57.144 10:40:51 -- common/autotest_common.sh@931 -- # uname 00:26:57.144 10:40:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:57.144 10:40:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 136732 00:26:57.402 killing process with pid 136732 00:26:57.402 10:40:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:57.402 10:40:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:57.402 10:40:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 136732' 00:26:57.402 10:40:51 -- common/autotest_common.sh@945 -- # kill 136732 00:26:57.402 10:40:51 -- common/autotest_common.sh@950 -- # wait 136732 00:26:58.337 10:40:52 -- interrupt/reactor_set_interrupt.sh@94 -- # cleanup 00:26:58.337 10:40:52 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:26:58.337 10:40:52 -- interrupt/reactor_set_interrupt.sh@97 -- # start_intr_tgt 00:26:58.337 10:40:52 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:58.337 10:40:52 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 
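
The teardown just traced goes through killprocess, which guards against stale pids and sudo wrappers before killing and reaping the target. Schematically (Linux branch only, as a sketch):

    # Sketch of killprocess as traced above.
    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2>/dev/null || return 0      # nothing left to kill
        if [[ $(uname) = Linux ]]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 here
            [[ $process_name != sudo ]] || return 1 # refuse to kill a sudo wrapper
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                 # reap it; also frees the RPC socket
    }
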
00:26:58.337 10:40:52 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=136898 00:26:58.337 10:40:52 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:26:58.337 10:40:52 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:58.337 10:40:52 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 136898 /var/tmp/spdk.sock 00:26:58.337 10:40:52 -- common/autotest_common.sh@819 -- # '[' -z 136898 ']' 00:26:58.337 10:40:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:58.337 10:40:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:58.337 10:40:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:58.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:58.337 10:40:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:58.337 10:40:52 -- common/autotest_common.sh@10 -- # set +x 00:26:58.337 [2024-07-12 10:40:52.229665] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:26:58.337 [2024-07-12 10:40:52.230000] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136898 ] 00:26:58.595 [2024-07-12 10:40:52.390724] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:58.853 [2024-07-12 10:40:52.556295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:58.853 [2024-07-12 10:40:52.556434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:58.853 [2024-07-12 10:40:52.556441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:59.111 [2024-07-12 10:40:52.800997] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
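
That relaunch is the same start_intr_tgt helper as the first run. In outline (a sketch that elides the trace's exact evaluation order):

    # Sketch of start_intr_tgt as traced above: bring up the interrupt-mode
    # example app on cores 0-2 and block until its RPC socket answers.
    start_intr_tgt() {
        local rpc_addr=${1:-/var/tmp/spdk.sock}
        local cpu_mask=${2:-0x07}
        /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt \
            -m "$cpu_mask" -r "$rpc_addr" -E -g &
        intr_tgt_pid=$!
        trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT
        waitforlisten "$intr_tgt_pid" "$rpc_addr"   # retries until the socket is up
    }
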
00:26:59.368 10:40:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:59.368 10:40:53 -- common/autotest_common.sh@852 -- # return 0 00:26:59.368 10:40:53 -- interrupt/reactor_set_interrupt.sh@98 -- # setup_bdev_mem 00:26:59.369 10:40:53 -- interrupt/interrupt_common.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:59.626 Malloc0 00:26:59.626 Malloc1 00:26:59.626 Malloc2 00:26:59.626 10:40:53 -- interrupt/reactor_set_interrupt.sh@99 -- # setup_bdev_aio 00:26:59.626 10:40:53 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:26:59.626 10:40:53 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:26:59.626 10:40:53 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:26:59.626 5000+0 records in 00:26:59.626 5000+0 records out 00:26:59.626 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0132516 s, 773 MB/s 00:26:59.626 10:40:53 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:26:59.884 AIO0 00:26:59.884 10:40:53 -- interrupt/reactor_set_interrupt.sh@101 -- # reactor_set_mode_with_threads 136898 00:26:59.884 10:40:53 -- interrupt/reactor_set_interrupt.sh@81 -- # reactor_set_intr_mode 136898 00:26:59.884 10:40:53 -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=136898 00:26:59.884 10:40:53 -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd= 00:26:59.884 10:40:53 -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:26:59.884 10:40:53 -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:26:59.884 10:40:53 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x1 00:26:59.884 10:40:53 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:26:59.884 10:40:53 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=1 00:26:59.884 10:40:53 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:26:59.884 10:40:53 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:26:59.884 10:40:53 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:27:00.142 10:40:53 -- interrupt/interrupt_common.sh@85 -- # echo 1 00:27:00.142 10:40:53 -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:27:00.142 10:40:53 -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:27:00.142 10:40:53 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x4 00:27:00.142 10:40:53 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:27:00.142 10:40:53 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=4 00:27:00.142 10:40:53 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:27:00.142 10:40:53 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:27:00.142 10:40:53 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:27:00.400 10:40:54 -- interrupt/interrupt_common.sh@85 -- # echo '' 00:27:00.400 10:40:54 -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:27:00.400 10:40:54 -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on 
reactor0.' 00:27:00.400 spdk_thread ids are 1 on reactor0. 00:27:00.401 10:40:54 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:27:00.401 10:40:54 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 136898 0 00:27:00.401 10:40:54 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 136898 0 idle 00:27:00.401 10:40:54 -- interrupt/interrupt_common.sh@33 -- # local pid=136898 00:27:00.401 10:40:54 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:27:00.401 10:40:54 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:27:00.401 10:40:54 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:27:00.401 10:40:54 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:27:00.401 10:40:54 -- interrupt/interrupt_common.sh@41 -- # hash top 00:27:00.401 10:40:54 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:27:00.401 10:40:54 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:27:00.401 10:40:54 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 136898 -w 256 00:27:00.401 10:40:54 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:27:00.659 10:40:54 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 136898 root 20 0 20.1t 145764 28792 S 0.0 1.2 0:00.62 reactor_0' 00:27:00.659 10:40:54 -- interrupt/interrupt_common.sh@48 -- # echo 136898 root 20 0 20.1t 145764 28792 S 0.0 1.2 0:00.62 reactor_0 00:27:00.659 10:40:54 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:27:00.659 10:40:54 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:27:00.659 10:40:54 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:27:00.659 10:40:54 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:27:00.659 10:40:54 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:27:00.659 10:40:54 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:27:00.659 10:40:54 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:27:00.659 10:40:54 -- interrupt/interrupt_common.sh@56 -- # return 0 00:27:00.659 10:40:54 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:27:00.659 10:40:54 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 136898 1 00:27:00.659 10:40:54 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 136898 1 idle 00:27:00.659 10:40:54 -- interrupt/interrupt_common.sh@33 -- # local pid=136898 00:27:00.659 10:40:54 -- interrupt/interrupt_common.sh@34 -- # local idx=1 00:27:00.659 10:40:54 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:27:00.659 10:40:54 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:27:00.659 10:40:54 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:27:00.659 10:40:54 -- interrupt/interrupt_common.sh@41 -- # hash top 00:27:00.659 10:40:54 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:27:00.659 10:40:54 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:27:00.659 10:40:54 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 136898 -w 256 00:27:00.659 10:40:54 -- interrupt/interrupt_common.sh@47 -- # grep reactor_1 00:27:00.918 10:40:54 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 136901 root 20 0 20.1t 145764 28792 S 0.0 1.2 0:00.00 reactor_1' 00:27:00.918 10:40:54 -- interrupt/interrupt_common.sh@48 -- # echo 136901 root 20 0 20.1t 145764 28792 S 0.0 1.2 0:00.00 reactor_1 00:27:00.918 10:40:54 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:27:00.918 10:40:54 -- interrupt/interrupt_common.sh@48 -- # awk '{print 
$9}' 00:27:00.918 10:40:54 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:27:00.918 10:40:54 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:27:00.918 10:40:54 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:27:00.918 10:40:54 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:27:00.918 10:40:54 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:27:00.918 10:40:54 -- interrupt/interrupt_common.sh@56 -- # return 0 00:27:00.918 10:40:54 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:27:00.918 10:40:54 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 136898 2 00:27:00.918 10:40:54 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 136898 2 idle 00:27:00.918 10:40:54 -- interrupt/interrupt_common.sh@33 -- # local pid=136898 00:27:00.918 10:40:54 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:27:00.918 10:40:54 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:27:00.918 10:40:54 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:27:00.918 10:40:54 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:27:00.918 10:40:54 -- interrupt/interrupt_common.sh@41 -- # hash top 00:27:00.918 10:40:54 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:27:00.918 10:40:54 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:27:00.918 10:40:54 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 136898 -w 256 00:27:00.918 10:40:54 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:27:00.918 10:40:54 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 136902 root 20 0 20.1t 145764 28792 S 0.0 1.2 0:00.00 reactor_2' 00:27:00.918 10:40:54 -- interrupt/interrupt_common.sh@48 -- # echo 136902 root 20 0 20.1t 145764 28792 S 0.0 1.2 0:00.00 reactor_2 00:27:00.918 10:40:54 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:27:00.918 10:40:54 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:27:00.918 10:40:54 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:27:00.918 10:40:54 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:27:00.918 10:40:54 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:27:00.918 10:40:54 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:27:00.918 10:40:54 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:27:00.918 10:40:54 -- interrupt/interrupt_common.sh@56 -- # return 0 00:27:00.918 10:40:54 -- interrupt/reactor_set_interrupt.sh@33 -- # '[' x '!=' x ']' 00:27:00.918 10:40:54 -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:27:01.176 [2024-07-12 10:40:55.033287] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:27:01.176 [2024-07-12 10:40:55.033764] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode. 00:27:01.176 [2024-07-12 10:40:55.034135] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:27:01.176 10:40:55 -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:27:01.434 [2024-07-12 10:40:55.213118] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 
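
Earlier in this run, setup_bdev_mem and setup_bdev_aio gave the reactors real pollers to service. In outline below; note the malloc sizes are assumptions (the trace only shows the bare rpc.py call and the resulting Malloc0/1/2, likely a single batched invocation, split into three calls here for clarity):

    # Sketch of the bdev setup traced above.
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    setup_bdev_mem() {
        "$rpc_py" bdev_malloc_create -b Malloc0 32 512   # sizes assumed, not traced
        "$rpc_py" bdev_malloc_create -b Malloc1 32 512
        "$rpc_py" bdev_malloc_create -b Malloc2 32 512
    }
    setup_bdev_aio() {
        if [[ $(uname -s) != FreeBSD ]]; then            # bdev_aio needs Linux AIO
            dd if=/dev/zero of="$testdir/aiofile" bs=2048 count=5000   # ~10 MB file
            "$rpc_py" bdev_aio_create "$testdir/aiofile" AIO0 2048
        fi
    }
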
00:27:01.434 [2024-07-12 10:40:55.213679] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:27:01.434 10:40:55 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:27:01.434 10:40:55 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 136898 0 00:27:01.434 10:40:55 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 136898 0 busy 00:27:01.434 10:40:55 -- interrupt/interrupt_common.sh@33 -- # local pid=136898 00:27:01.434 10:40:55 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:27:01.434 10:40:55 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:27:01.434 10:40:55 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:27:01.434 10:40:55 -- interrupt/interrupt_common.sh@41 -- # hash top 00:27:01.434 10:40:55 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:27:01.434 10:40:55 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:27:01.434 10:40:55 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 136898 -w 256 00:27:01.434 10:40:55 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:27:01.693 10:40:55 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 136898 root 20 0 20.1t 145832 28792 R 99.9 1.2 0:00.97 reactor_0' 00:27:01.693 10:40:55 -- interrupt/interrupt_common.sh@48 -- # echo 136898 root 20 0 20.1t 145832 28792 R 99.9 1.2 0:00.97 reactor_0 00:27:01.693 10:40:55 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:27:01.693 10:40:55 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:27:01.693 10:40:55 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:27:01.693 10:40:55 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:27:01.693 10:40:55 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:27:01.693 10:40:55 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:27:01.693 10:40:55 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:27:01.693 10:40:55 -- interrupt/interrupt_common.sh@56 -- # return 0 00:27:01.693 10:40:55 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:27:01.693 10:40:55 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 136898 2 00:27:01.693 10:40:55 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 136898 2 busy 00:27:01.693 10:40:55 -- interrupt/interrupt_common.sh@33 -- # local pid=136898 00:27:01.693 10:40:55 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:27:01.693 10:40:55 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:27:01.693 10:40:55 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:27:01.693 10:40:55 -- interrupt/interrupt_common.sh@41 -- # hash top 00:27:01.693 10:40:55 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:27:01.693 10:40:55 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:27:01.693 10:40:55 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 136898 -w 256 00:27:01.693 10:40:55 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:27:01.693 10:40:55 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 136902 root 20 0 20.1t 145832 28792 R 93.8 1.2 0:00.33 reactor_2' 00:27:01.693 10:40:55 -- interrupt/interrupt_common.sh@48 -- # echo 136902 root 20 0 20.1t 145832 28792 R 93.8 1.2 0:00.33 reactor_2 00:27:01.693 10:40:55 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:27:01.693 10:40:55 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:27:01.693 10:40:55 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=93.8 00:27:01.693 
10:40:55 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=93 00:27:01.693 10:40:55 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:27:01.693 10:40:55 -- interrupt/interrupt_common.sh@51 -- # [[ 93 -lt 70 ]] 00:27:01.693 10:40:55 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:27:01.693 10:40:55 -- interrupt/interrupt_common.sh@56 -- # return 0 00:27:01.693 10:40:55 -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:27:01.951 [2024-07-12 10:40:55.789418] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:27:01.952 [2024-07-12 10:40:55.789812] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:27:01.952 10:40:55 -- interrupt/reactor_set_interrupt.sh@52 -- # '[' x '!=' x ']' 00:27:01.952 10:40:55 -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 136898 2 00:27:01.952 10:40:55 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 136898 2 idle 00:27:01.952 10:40:55 -- interrupt/interrupt_common.sh@33 -- # local pid=136898 00:27:01.952 10:40:55 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:27:01.952 10:40:55 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:27:01.952 10:40:55 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:27:01.952 10:40:55 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:27:01.952 10:40:55 -- interrupt/interrupt_common.sh@41 -- # hash top 00:27:01.952 10:40:55 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:27:01.952 10:40:55 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:27:01.952 10:40:55 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 136898 -w 256 00:27:01.952 10:40:55 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:27:02.210 10:40:55 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 136902 root 20 0 20.1t 145896 28792 S 0.0 1.2 0:00.57 reactor_2' 00:27:02.210 10:40:55 -- interrupt/interrupt_common.sh@48 -- # echo 136902 root 20 0 20.1t 145896 28792 S 0.0 1.2 0:00.57 reactor_2 00:27:02.210 10:40:55 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:27:02.210 10:40:55 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:27:02.210 10:40:55 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:27:02.210 10:40:55 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:27:02.210 10:40:55 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:27:02.210 10:40:55 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:27:02.210 10:40:55 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:27:02.210 10:40:55 -- interrupt/interrupt_common.sh@56 -- # return 0 00:27:02.210 10:40:55 -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:27:02.468 [2024-07-12 10:40:56.205448] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:27:02.468 [2024-07-12 10:40:56.205982] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from poll mode. 
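
Both passes, with and without app_thread parked off reactor 0, follow the same shape. Condensed into one sketch (helper names as traced; the repeated top probes between steps are elided):

    # Sketch of reactor_set_intr_mode as exercised twice above.
    reactor_set_intr_mode() {
        local spdk_pid=$1 without_thd=$2
        if [[ -n $without_thd ]]; then          # first run: park app_thread on core 1
            for i in "${thd0_ids[@]}"; do
                "$rpc_py" thread_set_cpumask -i "$i" -m 0x2
            done
        fi
        "$rpc_py" --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d
        "$rpc_py" --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d
        for i in 0 2; do
            reactor_is_busy "$spdk_pid" "$i"    # poll mode must show ~100% CPU
        done
        "$rpc_py" --plugin interrupt_plugin reactor_set_interrupt_mode 2
        reactor_is_idle "$spdk_pid" 2
        "$rpc_py" --plugin interrupt_plugin reactor_set_interrupt_mode 0
        if [[ -n $without_thd ]]; then          # move app_thread back to core 0
            for i in "${thd0_ids[@]}"; do
                "$rpc_py" thread_set_cpumask -i "$i" -m 0x1
            done
        fi
        reactor_is_idle "$spdk_pid" 0
    }
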
00:27:02.468 [2024-07-12 10:40:56.206180] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:27:02.468 10:40:56 -- interrupt/reactor_set_interrupt.sh@63 -- # '[' x '!=' x ']' 00:27:02.468 10:40:56 -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 136898 0 00:27:02.468 10:40:56 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 136898 0 idle 00:27:02.468 10:40:56 -- interrupt/interrupt_common.sh@33 -- # local pid=136898 00:27:02.468 10:40:56 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:27:02.468 10:40:56 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:27:02.468 10:40:56 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:27:02.468 10:40:56 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:27:02.468 10:40:56 -- interrupt/interrupt_common.sh@41 -- # hash top 00:27:02.468 10:40:56 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:27:02.468 10:40:56 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:27:02.468 10:40:56 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 136898 -w 256 00:27:02.468 10:40:56 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:27:02.469 10:40:56 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 136898 root 20 0 20.1t 145936 28792 S 0.0 1.2 0:01.80 reactor_0' 00:27:02.726 10:40:56 -- interrupt/interrupt_common.sh@48 -- # echo 136898 root 20 0 20.1t 145936 28792 S 0.0 1.2 0:01.80 reactor_0 00:27:02.726 10:40:56 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:27:02.726 10:40:56 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:27:02.726 10:40:56 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:27:02.726 10:40:56 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:27:02.726 10:40:56 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:27:02.726 10:40:56 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:27:02.726 10:40:56 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:27:02.726 10:40:56 -- interrupt/interrupt_common.sh@56 -- # return 0 00:27:02.726 10:40:56 -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:27:02.726 10:40:56 -- interrupt/reactor_set_interrupt.sh@82 -- # return 0 00:27:02.726 10:40:56 -- interrupt/reactor_set_interrupt.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:27:02.726 10:40:56 -- interrupt/reactor_set_interrupt.sh@104 -- # killprocess 136898 00:27:02.726 10:40:56 -- common/autotest_common.sh@926 -- # '[' -z 136898 ']' 00:27:02.726 10:40:56 -- common/autotest_common.sh@930 -- # kill -0 136898 00:27:02.726 10:40:56 -- common/autotest_common.sh@931 -- # uname 00:27:02.726 10:40:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:02.726 10:40:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 136898 00:27:02.726 killing process with pid 136898 00:27:02.726 10:40:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:02.726 10:40:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:02.726 10:40:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 136898' 00:27:02.726 10:40:56 -- common/autotest_common.sh@945 -- # kill 136898 00:27:02.726 10:40:56 -- common/autotest_common.sh@950 -- # wait 136898 00:27:03.661 10:40:57 -- interrupt/reactor_set_interrupt.sh@105 -- # cleanup 00:27:03.662 10:40:57 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:27:03.662 ************************************ 
00:27:03.662 END TEST reactor_set_interrupt 00:27:03.662 ************************************ 00:27:03.662 00:27:03.662 real 0m11.396s 00:27:03.662 user 0m11.561s 00:27:03.662 sys 0m1.452s 00:27:03.662 10:40:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:03.662 10:40:57 -- common/autotest_common.sh@10 -- # set +x 00:27:03.923 10:40:57 -- spdk/autotest.sh@200 -- # run_test reap_unregistered_poller /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:27:03.923 10:40:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:03.923 10:40:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:03.923 10:40:57 -- common/autotest_common.sh@10 -- # set +x 00:27:03.923 ************************************ 00:27:03.923 START TEST reap_unregistered_poller 00:27:03.923 ************************************ 00:27:03.923 10:40:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:27:03.923 * Looking for test storage... 00:27:03.923 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:27:03.923 10:40:57 -- interrupt/reap_unregistered_poller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:27:03.923 10:40:57 -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:27:03.923 10:40:57 -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:27:03.923 10:40:57 -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:27:03.923 10:40:57 -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 00:27:03.923 10:40:57 -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:03.923 10:40:57 -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:27:03.923 10:40:57 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:27:03.923 10:40:57 -- common/autotest_common.sh@34 -- # set -e 00:27:03.923 10:40:57 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:27:03.923 10:40:57 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:27:03.923 10:40:57 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:27:03.923 10:40:57 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:27:03.923 10:40:57 -- common/build_config.sh@1 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:27:03.923 10:40:57 -- common/build_config.sh@2 -- # CONFIG_FIO_PLUGIN=y 00:27:03.923 10:40:57 -- common/build_config.sh@3 -- # CONFIG_NVME_CUSE=y 00:27:03.923 10:40:57 -- common/build_config.sh@4 -- # CONFIG_RAID5F=y 00:27:03.923 10:40:57 -- common/build_config.sh@5 -- # CONFIG_LTO=n 00:27:03.923 10:40:57 -- common/build_config.sh@6 -- # CONFIG_SMA=n 00:27:03.923 10:40:57 -- common/build_config.sh@7 -- # CONFIG_ISAL=y 00:27:03.923 10:40:57 -- common/build_config.sh@8 -- # CONFIG_OPENSSL_PATH= 00:27:03.923 10:40:57 -- common/build_config.sh@9 -- # CONFIG_IDXD_KERNEL=n 00:27:03.923 10:40:57 -- common/build_config.sh@10 -- # CONFIG_URING_PATH= 00:27:03.923 10:40:57 -- common/build_config.sh@11 -- # CONFIG_DAOS=n 00:27:03.923 10:40:57 -- common/build_config.sh@12 -- # CONFIG_DPDK_LIB_DIR= 00:27:03.923 10:40:57 -- common/build_config.sh@13 -- # CONFIG_OCF=n 00:27:03.923 10:40:57 -- common/build_config.sh@14 -- # CONFIG_EXAMPLES=y 
00:27:03.923 10:40:57 -- common/build_config.sh@15 -- # CONFIG_RDMA_PROV=verbs 00:27:03.923 10:40:57 -- common/build_config.sh@16 -- # CONFIG_ISCSI_INITIATOR=y 00:27:03.923 10:40:57 -- common/build_config.sh@17 -- # CONFIG_VTUNE=n 00:27:03.923 10:40:57 -- common/build_config.sh@18 -- # CONFIG_DPDK_INC_DIR= 00:27:03.923 10:40:57 -- common/build_config.sh@19 -- # CONFIG_CET=n 00:27:03.923 10:40:57 -- common/build_config.sh@20 -- # CONFIG_TESTS=y 00:27:03.923 10:40:57 -- common/build_config.sh@21 -- # CONFIG_APPS=y 00:27:03.923 10:40:57 -- common/build_config.sh@22 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:27:03.923 10:40:57 -- common/build_config.sh@23 -- # CONFIG_DAOS_DIR= 00:27:03.923 10:40:57 -- common/build_config.sh@24 -- # CONFIG_CRYPTO_MLX5=n 00:27:03.923 10:40:57 -- common/build_config.sh@25 -- # CONFIG_XNVME=n 00:27:03.923 10:40:57 -- common/build_config.sh@26 -- # CONFIG_UNIT_TESTS=y 00:27:03.923 10:40:57 -- common/build_config.sh@27 -- # CONFIG_FUSE=n 00:27:03.923 10:40:57 -- common/build_config.sh@28 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:27:03.923 10:40:57 -- common/build_config.sh@29 -- # CONFIG_OCF_PATH= 00:27:03.923 10:40:57 -- common/build_config.sh@30 -- # CONFIG_WPDK_DIR= 00:27:03.923 10:40:57 -- common/build_config.sh@31 -- # CONFIG_VFIO_USER=n 00:27:03.923 10:40:57 -- common/build_config.sh@32 -- # CONFIG_MAX_LCORES= 00:27:03.923 10:40:57 -- common/build_config.sh@33 -- # CONFIG_ARCH=native 00:27:03.923 10:40:57 -- common/build_config.sh@34 -- # CONFIG_TSAN=n 00:27:03.923 10:40:57 -- common/build_config.sh@35 -- # CONFIG_VIRTIO=y 00:27:03.923 10:40:57 -- common/build_config.sh@36 -- # CONFIG_IPSEC_MB=n 00:27:03.923 10:40:57 -- common/build_config.sh@37 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:27:03.923 10:40:57 -- common/build_config.sh@38 -- # CONFIG_ASAN=y 00:27:03.923 10:40:57 -- common/build_config.sh@39 -- # CONFIG_SHARED=n 00:27:03.923 10:40:57 -- common/build_config.sh@40 -- # CONFIG_VTUNE_DIR= 00:27:03.923 10:40:57 -- common/build_config.sh@41 -- # CONFIG_RDMA_SET_TOS=y 00:27:03.923 10:40:57 -- common/build_config.sh@42 -- # CONFIG_VBDEV_COMPRESS=n 00:27:03.923 10:40:57 -- common/build_config.sh@43 -- # CONFIG_VFIO_USER_DIR= 00:27:03.923 10:40:57 -- common/build_config.sh@44 -- # CONFIG_FUZZER_LIB= 00:27:03.923 10:40:57 -- common/build_config.sh@45 -- # CONFIG_HAVE_EXECINFO_H=y 00:27:03.923 10:40:57 -- common/build_config.sh@46 -- # CONFIG_USDT=n 00:27:03.923 10:40:57 -- common/build_config.sh@47 -- # CONFIG_URING_ZNS=n 00:27:03.923 10:40:57 -- common/build_config.sh@48 -- # CONFIG_FC_PATH= 00:27:03.923 10:40:57 -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:27:03.923 10:40:57 -- common/build_config.sh@50 -- # CONFIG_CUSTOMOCF=n 00:27:03.923 10:40:57 -- common/build_config.sh@51 -- # CONFIG_DPDK_PKG_CONFIG=n 00:27:03.923 10:40:57 -- common/build_config.sh@52 -- # CONFIG_WERROR=y 00:27:03.923 10:40:57 -- common/build_config.sh@53 -- # CONFIG_DEBUG=y 00:27:03.923 10:40:57 -- common/build_config.sh@54 -- # CONFIG_RDMA=y 00:27:03.923 10:40:57 -- common/build_config.sh@55 -- # CONFIG_HAVE_ARC4RANDOM=n 00:27:03.923 10:40:57 -- common/build_config.sh@56 -- # CONFIG_FUZZER=n 00:27:03.923 10:40:57 -- common/build_config.sh@57 -- # CONFIG_FC=n 00:27:03.923 10:40:57 -- common/build_config.sh@58 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:27:03.923 10:40:57 -- common/build_config.sh@59 -- # CONFIG_HAVE_LIBARCHIVE=n 00:27:03.923 10:40:57 -- common/build_config.sh@60 -- # CONFIG_DPDK_COMPRESSDEV=n 00:27:03.923 10:40:57 -- common/build_config.sh@61 -- # 
CONFIG_CROSS_PREFIX= 00:27:03.923 10:40:57 -- common/build_config.sh@62 -- # CONFIG_PREFIX=/usr/local 00:27:03.923 10:40:57 -- common/build_config.sh@63 -- # CONFIG_HAVE_LIBBSD=n 00:27:03.923 10:40:57 -- common/build_config.sh@64 -- # CONFIG_UBSAN=y 00:27:03.923 10:40:57 -- common/build_config.sh@65 -- # CONFIG_PGO_CAPTURE=n 00:27:03.923 10:40:57 -- common/build_config.sh@66 -- # CONFIG_UBLK=n 00:27:03.924 10:40:57 -- common/build_config.sh@67 -- # CONFIG_ISAL_CRYPTO=y 00:27:03.924 10:40:57 -- common/build_config.sh@68 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:27:03.924 10:40:57 -- common/build_config.sh@69 -- # CONFIG_CRYPTO=n 00:27:03.924 10:40:57 -- common/build_config.sh@70 -- # CONFIG_RBD=n 00:27:03.924 10:40:57 -- common/build_config.sh@71 -- # CONFIG_LIBDIR= 00:27:03.924 10:40:57 -- common/build_config.sh@72 -- # CONFIG_IPSEC_MB_DIR= 00:27:03.924 10:40:57 -- common/build_config.sh@73 -- # CONFIG_PGO_USE=n 00:27:03.924 10:40:57 -- common/build_config.sh@74 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:27:03.924 10:40:57 -- common/build_config.sh@75 -- # CONFIG_GOLANG=n 00:27:03.924 10:40:57 -- common/build_config.sh@76 -- # CONFIG_VHOST=y 00:27:03.924 10:40:57 -- common/build_config.sh@77 -- # CONFIG_IDXD=y 00:27:03.924 10:40:57 -- common/build_config.sh@78 -- # CONFIG_AVAHI=n 00:27:03.924 10:40:57 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:27:03.924 10:40:57 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:27:03.924 10:40:57 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:27:03.924 10:40:57 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:27:03.924 10:40:57 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:27:03.924 10:40:57 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:27:03.924 10:40:57 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:27:03.924 10:40:57 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:27:03.924 10:40:57 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:27:03.924 10:40:57 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:27:03.924 10:40:57 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:27:03.924 10:40:57 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:27:03.924 10:40:57 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:27:03.924 10:40:57 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:27:03.924 10:40:57 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:27:03.924 10:40:57 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:27:03.924 10:40:57 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:27:03.924 #define SPDK_CONFIG_H 00:27:03.924 #define SPDK_CONFIG_APPS 1 00:27:03.924 #define SPDK_CONFIG_ARCH native 00:27:03.924 #define SPDK_CONFIG_ASAN 1 00:27:03.924 #undef SPDK_CONFIG_AVAHI 00:27:03.924 #undef SPDK_CONFIG_CET 00:27:03.924 #define SPDK_CONFIG_COVERAGE 1 00:27:03.924 #define SPDK_CONFIG_CROSS_PREFIX 00:27:03.924 #undef SPDK_CONFIG_CRYPTO 00:27:03.924 #undef SPDK_CONFIG_CRYPTO_MLX5 00:27:03.924 #undef SPDK_CONFIG_CUSTOMOCF 00:27:03.924 #undef SPDK_CONFIG_DAOS 00:27:03.924 #define SPDK_CONFIG_DAOS_DIR 00:27:03.924 
#define SPDK_CONFIG_DEBUG 1 00:27:03.924 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:27:03.924 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:27:03.924 #define SPDK_CONFIG_DPDK_INC_DIR 00:27:03.924 #define SPDK_CONFIG_DPDK_LIB_DIR 00:27:03.924 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:27:03.924 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:27:03.924 #define SPDK_CONFIG_EXAMPLES 1 00:27:03.924 #undef SPDK_CONFIG_FC 00:27:03.924 #define SPDK_CONFIG_FC_PATH 00:27:03.924 #define SPDK_CONFIG_FIO_PLUGIN 1 00:27:03.924 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:27:03.924 #undef SPDK_CONFIG_FUSE 00:27:03.924 #undef SPDK_CONFIG_FUZZER 00:27:03.924 #define SPDK_CONFIG_FUZZER_LIB 00:27:03.924 #undef SPDK_CONFIG_GOLANG 00:27:03.924 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:27:03.924 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:27:03.924 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:27:03.924 #undef SPDK_CONFIG_HAVE_LIBBSD 00:27:03.924 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:27:03.924 #define SPDK_CONFIG_IDXD 1 00:27:03.924 #undef SPDK_CONFIG_IDXD_KERNEL 00:27:03.924 #undef SPDK_CONFIG_IPSEC_MB 00:27:03.924 #define SPDK_CONFIG_IPSEC_MB_DIR 00:27:03.924 #define SPDK_CONFIG_ISAL 1 00:27:03.924 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:27:03.924 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:27:03.924 #define SPDK_CONFIG_LIBDIR 00:27:03.924 #undef SPDK_CONFIG_LTO 00:27:03.924 #define SPDK_CONFIG_MAX_LCORES 00:27:03.924 #define SPDK_CONFIG_NVME_CUSE 1 00:27:03.924 #undef SPDK_CONFIG_OCF 00:27:03.924 #define SPDK_CONFIG_OCF_PATH 00:27:03.924 #define SPDK_CONFIG_OPENSSL_PATH 00:27:03.924 #undef SPDK_CONFIG_PGO_CAPTURE 00:27:03.924 #undef SPDK_CONFIG_PGO_USE 00:27:03.924 #define SPDK_CONFIG_PREFIX /usr/local 00:27:03.924 #define SPDK_CONFIG_RAID5F 1 00:27:03.924 #undef SPDK_CONFIG_RBD 00:27:03.924 #define SPDK_CONFIG_RDMA 1 00:27:03.924 #define SPDK_CONFIG_RDMA_PROV verbs 00:27:03.924 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:27:03.924 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:27:03.924 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:27:03.924 #undef SPDK_CONFIG_SHARED 00:27:03.924 #undef SPDK_CONFIG_SMA 00:27:03.924 #define SPDK_CONFIG_TESTS 1 00:27:03.924 #undef SPDK_CONFIG_TSAN 00:27:03.924 #undef SPDK_CONFIG_UBLK 00:27:03.924 #define SPDK_CONFIG_UBSAN 1 00:27:03.924 #define SPDK_CONFIG_UNIT_TESTS 1 00:27:03.924 #undef SPDK_CONFIG_URING 00:27:03.924 #define SPDK_CONFIG_URING_PATH 00:27:03.924 #undef SPDK_CONFIG_URING_ZNS 00:27:03.924 #undef SPDK_CONFIG_USDT 00:27:03.924 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:27:03.924 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:27:03.924 #undef SPDK_CONFIG_VFIO_USER 00:27:03.924 #define SPDK_CONFIG_VFIO_USER_DIR 00:27:03.924 #define SPDK_CONFIG_VHOST 1 00:27:03.924 #define SPDK_CONFIG_VIRTIO 1 00:27:03.924 #undef SPDK_CONFIG_VTUNE 00:27:03.924 #define SPDK_CONFIG_VTUNE_DIR 00:27:03.924 #define SPDK_CONFIG_WERROR 1 00:27:03.924 #define SPDK_CONFIG_WPDK_DIR 00:27:03.924 #undef SPDK_CONFIG_XNVME 00:27:03.924 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:27:03.924 10:40:57 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:27:03.924 10:40:57 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:03.924 10:40:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:03.924 10:40:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:03.924 10:40:57 -- scripts/common.sh@442 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:27:03.924 10:40:57 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:03.924 10:40:57 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:03.924 10:40:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:03.924 10:40:57 -- paths/export.sh@5 -- # export PATH 00:27:03.924 10:40:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:03.924 10:40:57 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:27:03.924 10:40:57 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:27:03.924 10:40:57 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:27:03.924 10:40:57 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:27:03.924 10:40:57 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:27:03.924 10:40:57 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:27:03.924 10:40:57 -- pm/common@16 -- # TEST_TAG=N/A 00:27:03.924 10:40:57 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:27:03.925 10:40:57 -- common/autotest_common.sh@52 -- # : 1 00:27:03.925 10:40:57 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:27:03.925 10:40:57 -- common/autotest_common.sh@56 -- # : 0 00:27:03.925 10:40:57 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:27:03.925 10:40:57 -- common/autotest_common.sh@58 -- # : 0 00:27:03.925 10:40:57 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:27:03.925 10:40:57 -- common/autotest_common.sh@60 -- # : 1 00:27:03.925 10:40:57 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:27:03.925 10:40:57 -- common/autotest_common.sh@62 -- # : 1 00:27:03.925 10:40:57 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:27:03.925 10:40:57 -- common/autotest_common.sh@64 -- # : 00:27:03.925 10:40:57 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:27:03.925 10:40:57 -- common/autotest_common.sh@66 -- # : 0 00:27:03.925 10:40:57 -- 
common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:27:03.925 10:40:57 -- common/autotest_common.sh@68 -- # : 0 00:27:03.925 10:40:57 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:27:03.925 10:40:57 -- common/autotest_common.sh@70 -- # : 0 00:27:03.925 10:40:57 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:27:03.925 10:40:57 -- common/autotest_common.sh@72 -- # : 0 00:27:03.925 10:40:57 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:27:03.925 10:40:57 -- common/autotest_common.sh@74 -- # : 1 00:27:03.925 10:40:57 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:27:03.925 10:40:57 -- common/autotest_common.sh@76 -- # : 0 00:27:03.925 10:40:57 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:27:03.925 10:40:57 -- common/autotest_common.sh@78 -- # : 0 00:27:03.925 10:40:57 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:27:03.925 10:40:57 -- common/autotest_common.sh@80 -- # : 0 00:27:03.925 10:40:57 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:27:03.925 10:40:57 -- common/autotest_common.sh@82 -- # : 0 00:27:03.925 10:40:57 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:27:03.925 10:40:57 -- common/autotest_common.sh@84 -- # : 0 00:27:03.925 10:40:57 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:27:03.925 10:40:57 -- common/autotest_common.sh@86 -- # : 0 00:27:03.925 10:40:57 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:27:03.925 10:40:57 -- common/autotest_common.sh@88 -- # : 0 00:27:03.925 10:40:57 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:27:03.925 10:40:57 -- common/autotest_common.sh@90 -- # : 0 00:27:03.925 10:40:57 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:27:03.925 10:40:57 -- common/autotest_common.sh@92 -- # : 0 00:27:03.925 10:40:57 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:27:03.925 10:40:57 -- common/autotest_common.sh@94 -- # : 0 00:27:03.925 10:40:57 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:27:03.925 10:40:57 -- common/autotest_common.sh@96 -- # : rdma 00:27:03.925 10:40:57 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:27:03.925 10:40:57 -- common/autotest_common.sh@98 -- # : 0 00:27:03.925 10:40:57 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:27:03.925 10:40:57 -- common/autotest_common.sh@100 -- # : 0 00:27:03.925 10:40:57 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:27:03.925 10:40:57 -- common/autotest_common.sh@102 -- # : 1 00:27:03.925 10:40:57 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:27:03.925 10:40:57 -- common/autotest_common.sh@104 -- # : 0 00:27:03.925 10:40:57 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:27:03.925 10:40:57 -- common/autotest_common.sh@106 -- # : 0 00:27:03.925 10:40:57 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:27:03.925 10:40:57 -- common/autotest_common.sh@108 -- # : 0 00:27:03.925 10:40:57 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:27:03.925 10:40:57 -- common/autotest_common.sh@110 -- # : 0 00:27:03.925 10:40:57 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:27:03.925 10:40:57 -- common/autotest_common.sh@112 -- # : 0 00:27:03.925 10:40:57 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:27:03.925 10:40:57 -- common/autotest_common.sh@114 -- # : 1 
00:27:03.925 10:40:57 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:27:03.925 10:40:57 -- common/autotest_common.sh@116 -- # : 1 00:27:03.925 10:40:57 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:27:03.925 10:40:57 -- common/autotest_common.sh@118 -- # : 00:27:03.925 10:40:57 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:27:03.925 10:40:57 -- common/autotest_common.sh@120 -- # : 0 00:27:03.925 10:40:57 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:27:03.925 10:40:57 -- common/autotest_common.sh@122 -- # : 0 00:27:03.925 10:40:57 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:27:03.925 10:40:57 -- common/autotest_common.sh@124 -- # : 0 00:27:03.925 10:40:57 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:27:03.925 10:40:57 -- common/autotest_common.sh@126 -- # : 0 00:27:03.925 10:40:57 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:27:03.925 10:40:57 -- common/autotest_common.sh@128 -- # : 0 00:27:03.925 10:40:57 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:27:03.925 10:40:57 -- common/autotest_common.sh@130 -- # : 0 00:27:03.925 10:40:57 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:27:03.925 10:40:57 -- common/autotest_common.sh@132 -- # : 00:27:03.925 10:40:57 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:27:03.925 10:40:57 -- common/autotest_common.sh@134 -- # : true 00:27:03.925 10:40:57 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:27:03.925 10:40:57 -- common/autotest_common.sh@136 -- # : 1 00:27:03.925 10:40:57 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:27:03.925 10:40:57 -- common/autotest_common.sh@138 -- # : 0 00:27:03.925 10:40:57 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:27:03.925 10:40:57 -- common/autotest_common.sh@140 -- # : 0 00:27:03.925 10:40:57 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:27:03.925 10:40:57 -- common/autotest_common.sh@142 -- # : 0 00:27:03.925 10:40:57 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:27:03.925 10:40:57 -- common/autotest_common.sh@144 -- # : 0 00:27:03.925 10:40:57 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:27:03.925 10:40:57 -- common/autotest_common.sh@146 -- # : 0 00:27:03.925 10:40:57 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:27:03.925 10:40:57 -- common/autotest_common.sh@148 -- # : 00:27:03.925 10:40:57 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:27:03.925 10:40:57 -- common/autotest_common.sh@150 -- # : 0 00:27:03.925 10:40:57 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:27:03.925 10:40:57 -- common/autotest_common.sh@152 -- # : 0 00:27:03.925 10:40:57 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:27:03.925 10:40:57 -- common/autotest_common.sh@154 -- # : 0 00:27:03.925 10:40:57 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:27:03.925 10:40:57 -- common/autotest_common.sh@156 -- # : 0 00:27:03.925 10:40:57 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:27:03.925 10:40:57 -- common/autotest_common.sh@158 -- # : 0 00:27:03.925 10:40:57 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:27:03.925 10:40:57 -- common/autotest_common.sh@160 -- # : 0 00:27:03.925 10:40:57 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:27:03.925 10:40:57 -- common/autotest_common.sh@163 -- # 
: 00:27:03.925 10:40:57 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:27:03.925 10:40:57 -- common/autotest_common.sh@165 -- # : 0 00:27:03.925 10:40:57 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:27:03.925 10:40:57 -- common/autotest_common.sh@167 -- # : 0 00:27:03.925 10:40:57 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:27:03.925 10:40:57 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:27:03.925 10:40:57 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:27:03.925 10:40:57 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:27:03.925 10:40:57 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:27:03.925 10:40:57 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:27:03.925 10:40:57 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:27:03.925 10:40:57 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:27:03.925 10:40:57 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:27:03.926 10:40:57 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:27:03.926 10:40:57 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:27:03.926 10:40:57 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:27:03.926 10:40:57 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:27:03.926 10:40:57 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:27:03.926 10:40:57 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:27:03.926 10:40:57 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:27:03.926 10:40:57 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:27:03.926 10:40:57 -- common/autotest_common.sh@190 -- # export 
UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:27:03.926 10:40:57 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:27:03.926 10:40:57 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:27:03.926 10:40:57 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:27:03.926 10:40:57 -- common/autotest_common.sh@196 -- # cat 00:27:03.926 10:40:57 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:27:03.926 10:40:57 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:27:03.926 10:40:57 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:27:03.926 10:40:57 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:27:03.926 10:40:57 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:27:03.926 10:40:57 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:27:03.926 10:40:57 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:27:03.926 10:40:57 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:27:03.926 10:40:57 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:27:03.926 10:40:57 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:27:03.926 10:40:57 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:27:03.926 10:40:57 -- common/autotest_common.sh@239 -- # export QEMU_BIN= 00:27:03.926 10:40:57 -- common/autotest_common.sh@239 -- # QEMU_BIN= 00:27:03.926 10:40:57 -- common/autotest_common.sh@240 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:27:03.926 10:40:57 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:27:03.926 10:40:57 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:27:03.926 10:40:57 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:27:03.926 10:40:57 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:27:03.926 10:40:57 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:27:03.926 10:40:57 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:27:03.926 10:40:57 -- common/autotest_common.sh@249 -- # export valgrind= 00:27:03.926 10:40:57 -- common/autotest_common.sh@249 -- # valgrind= 00:27:03.926 10:40:57 -- common/autotest_common.sh@255 -- # uname -s 00:27:03.926 10:40:57 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:27:03.926 10:40:57 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:27:03.926 10:40:57 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:27:03.926 10:40:57 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:27:03.926 10:40:57 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:27:03.926 10:40:57 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:27:03.926 10:40:57 -- common/autotest_common.sh@265 -- # MAKE=make 00:27:03.926 10:40:57 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j10 00:27:03.926 10:40:57 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:27:03.926 10:40:57 -- 
common/autotest_common.sh@282 -- # HUGEMEM=4096 00:27:03.926 10:40:57 -- common/autotest_common.sh@284 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:27:03.926 10:40:57 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:27:03.926 10:40:57 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:27:03.926 10:40:57 -- common/autotest_common.sh@309 -- # [[ -z 137066 ]] 00:27:03.926 10:40:57 -- common/autotest_common.sh@309 -- # kill -0 137066 00:27:03.926 10:40:57 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:27:03.926 10:40:57 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:27:03.926 10:40:57 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:27:03.926 10:40:57 -- common/autotest_common.sh@322 -- # local mount target_dir 00:27:03.926 10:40:57 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:27:03.926 10:40:57 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:27:03.926 10:40:57 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:27:03.926 10:40:57 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:27:03.926 10:40:57 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.Nlun3o 00:27:03.926 10:40:57 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:27:03.926 10:40:57 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:27:03.926 10:40:57 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:27:03.926 10:40:57 -- common/autotest_common.sh@346 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.Nlun3o/tests/interrupt /tmp/spdk.Nlun3o 00:27:03.926 10:40:57 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:27:03.926 10:40:57 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:27:03.926 10:40:57 -- common/autotest_common.sh@318 -- # df -T 00:27:03.926 10:40:57 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:27:03.926 10:40:57 -- common/autotest_common.sh@352 -- # mounts["$mount"]=udev 00:27:03.926 10:40:57 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 00:27:03.926 10:40:57 -- common/autotest_common.sh@353 -- # avails["$mount"]=6224457728 00:27:03.926 10:40:57 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6224457728 00:27:03.926 10:40:57 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:27:03.926 10:40:57 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:27:03.926 10:40:57 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:27:03.926 10:40:57 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:27:03.926 10:40:57 -- common/autotest_common.sh@353 -- # avails["$mount"]=1249763328 00:27:03.926 10:40:57 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1254514688 00:27:03.926 10:40:57 -- common/autotest_common.sh@354 -- # uses["$mount"]=4751360 00:27:03.926 10:40:57 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:27:03.926 10:40:57 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda1 00:27:03.926 10:40:57 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext4 00:27:03.926 10:40:57 -- common/autotest_common.sh@353 -- # avails["$mount"]=10616115200 00:27:03.926 10:40:57 -- common/autotest_common.sh@353 -- # sizes["$mount"]=20616794112 00:27:03.926 10:40:57 -- common/autotest_common.sh@354 -- # uses["$mount"]=9983901696 00:27:03.926 10:40:57 -- 
common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:27:03.926 10:40:57 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:27:03.926 10:40:57 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:27:03.926 10:40:57 -- common/autotest_common.sh@353 -- # avails["$mount"]=6269964288 00:27:03.926 10:40:57 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6272557056 00:27:03.926 10:40:57 -- common/autotest_common.sh@354 -- # uses["$mount"]=2592768 00:27:03.926 10:40:57 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:27:03.926 10:40:57 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:27:03.926 10:40:57 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:27:03.926 10:40:57 -- common/autotest_common.sh@353 -- # avails["$mount"]=5242880 00:27:03.926 10:40:57 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5242880 00:27:03.926 10:40:57 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:27:03.926 10:40:57 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:27:03.926 10:40:57 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:27:03.926 10:40:57 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:27:03.926 10:40:57 -- common/autotest_common.sh@353 -- # avails["$mount"]=6272557056 00:27:03.926 10:40:57 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6272557056 00:27:03.926 10:40:57 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:27:03.926 10:40:57 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:27:03.926 10:40:57 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/loop0 00:27:03.926 10:40:57 -- common/autotest_common.sh@352 -- # fss["$mount"]=squashfs 00:27:03.926 10:40:57 -- common/autotest_common.sh@353 -- # avails["$mount"]=0 00:27:03.926 10:40:57 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:27:03.927 10:40:57 -- common/autotest_common.sh@354 -- # uses["$mount"]=67108864 00:27:03.927 10:40:57 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:27:03.927 10:40:57 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda15 00:27:03.927 10:40:57 -- common/autotest_common.sh@352 -- # fss["$mount"]=vfat 00:27:03.927 10:40:57 -- common/autotest_common.sh@353 -- # avails["$mount"]=103089152 00:27:03.927 10:40:57 -- common/autotest_common.sh@353 -- # sizes["$mount"]=109422592 00:27:03.927 10:40:57 -- common/autotest_common.sh@354 -- # uses["$mount"]=6334464 00:27:03.927 10:40:57 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:27:03.927 10:40:57 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/loop2 00:27:03.927 10:40:57 -- common/autotest_common.sh@352 -- # fss["$mount"]=squashfs 00:27:03.927 10:40:57 -- common/autotest_common.sh@353 -- # avails["$mount"]=0 00:27:03.927 10:40:57 -- common/autotest_common.sh@353 -- # sizes["$mount"]=41025536 00:27:03.927 10:40:57 -- common/autotest_common.sh@354 -- # uses["$mount"]=41025536 00:27:03.927 10:40:57 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:27:03.927 10:40:57 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/loop1 00:27:03.927 10:40:57 -- common/autotest_common.sh@352 -- # fss["$mount"]=squashfs 00:27:03.927 10:40:57 -- common/autotest_common.sh@353 -- # avails["$mount"]=0 00:27:03.927 10:40:57 -- common/autotest_common.sh@353 -- # sizes["$mount"]=96337920 00:27:03.927 10:40:57 -- 
common/autotest_common.sh@354 -- # uses["$mount"]=96337920 00:27:03.927 10:40:57 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:27:03.927 10:40:57 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:27:03.927 10:40:57 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:27:03.927 10:40:57 -- common/autotest_common.sh@353 -- # avails["$mount"]=1254510592 00:27:03.927 10:40:57 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1254510592 00:27:03.927 10:40:57 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:27:03.927 10:40:57 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:27:03.927 10:40:57 -- common/autotest_common.sh@352 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu20-vg-autotest_2/ubuntu2004-libvirt/output 00:27:03.927 10:40:57 -- common/autotest_common.sh@352 -- # fss["$mount"]=fuse.sshfs 00:27:03.927 10:40:57 -- common/autotest_common.sh@353 -- # avails["$mount"]=98674139136 00:27:03.927 10:40:57 -- common/autotest_common.sh@353 -- # sizes["$mount"]=105088212992 00:27:03.927 10:40:57 -- common/autotest_common.sh@354 -- # uses["$mount"]=1028640768 00:27:03.927 10:40:57 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:27:03.927 10:40:57 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/loop3 00:27:03.927 10:40:57 -- common/autotest_common.sh@352 -- # fss["$mount"]=squashfs 00:27:03.927 10:40:57 -- common/autotest_common.sh@353 -- # avails["$mount"]=0 00:27:03.927 10:40:57 -- common/autotest_common.sh@353 -- # sizes["$mount"]=40763392 00:27:03.927 10:40:57 -- common/autotest_common.sh@354 -- # uses["$mount"]=40763392 00:27:03.927 10:40:57 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:27:03.927 10:40:57 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/loop4 00:27:03.927 10:40:57 -- common/autotest_common.sh@352 -- # fss["$mount"]=squashfs 00:27:03.927 10:40:57 -- common/autotest_common.sh@353 -- # avails["$mount"]=0 00:27:03.927 10:40:57 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:27:03.927 10:40:57 -- common/autotest_common.sh@354 -- # uses["$mount"]=67108864 00:27:03.927 10:40:57 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:27:03.927 10:40:57 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:27:03.927 * Looking for test storage... 
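Condensed from the set_test_storage trace above: the helper snapshots every mount once with df -T, then (in the lines that follow) maps the test directory onto its backing mount and accepts it only if enough space is free. A standalone sketch using the array and variable names from the trace; the *1024 scaling is an assumption, since df -T reports 1K blocks while the traced values are in bytes:

declare -A avails sizes uses
while read -r source fs size use avail _ mount; do
    # df -T columns: device, fstype, 1K-blocks, used, available, use%, mountpoint
    sizes["$mount"]=$((size * 1024))
    uses["$mount"]=$((use * 1024))
    avails["$mount"]=$((avail * 1024))
done < <(df -T | grep -v Filesystem)

requested_size=2214592512    # 2 GiB plus the margin seen in the trace
target_dir=/home/vagrant/spdk_repo/spdk/test/interrupt
mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
target_space=${avails["$mount"]}
new_size=$((uses["$mount"] + requested_size))
# Accept only if the space fits and would not push the filesystem past 95%
# full; this mirrors the (( new_size * 100 / sizes[/] > 95 )) rejection
# test traced below.
if (( target_space >= requested_size && new_size * 100 / sizes["$mount"] <= 95 )); then
    printf '* Found test storage at %s\n' "$target_dir"
fi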
00:27:03.927 10:40:57 -- common/autotest_common.sh@359 -- # local target_space new_size 00:27:03.927 10:40:57 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:27:03.927 10:40:57 -- common/autotest_common.sh@363 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:27:03.927 10:40:57 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:27:03.927 10:40:57 -- common/autotest_common.sh@363 -- # mount=/ 00:27:03.927 10:40:57 -- common/autotest_common.sh@365 -- # target_space=10616115200 00:27:03.927 10:40:57 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:27:03.927 10:40:57 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:27:03.927 10:40:57 -- common/autotest_common.sh@371 -- # [[ ext4 == tmpfs ]] 00:27:03.927 10:40:57 -- common/autotest_common.sh@371 -- # [[ ext4 == ramfs ]] 00:27:03.927 10:40:57 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:27:03.927 10:40:57 -- common/autotest_common.sh@372 -- # new_size=12198494208 00:27:03.927 10:40:57 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:27:03.927 10:40:57 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:27:03.927 10:40:57 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:27:03.927 10:40:57 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:27:03.927 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:27:03.927 10:40:57 -- common/autotest_common.sh@380 -- # return 0 00:27:03.927 10:40:57 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:27:03.927 10:40:57 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:27:03.927 10:40:57 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:27:03.927 10:40:57 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:27:03.927 10:40:57 -- common/autotest_common.sh@1672 -- # true 00:27:03.927 10:40:57 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:27:03.927 10:40:57 -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:27:03.927 10:40:57 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:27:03.927 10:40:57 -- common/autotest_common.sh@27 -- # exec 00:27:03.927 10:40:57 -- common/autotest_common.sh@29 -- # exec 00:27:03.927 10:40:57 -- common/autotest_common.sh@31 -- # xtrace_restore 00:27:03.927 10:40:57 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:27:03.927 10:40:57 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:27:03.927 10:40:57 -- common/autotest_common.sh@18 -- # set -x 00:27:03.927 10:40:57 -- interrupt/interrupt_common.sh@9 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:03.927 10:40:57 -- interrupt/interrupt_common.sh@11 -- # r0_mask=0x1 00:27:03.927 10:40:57 -- interrupt/interrupt_common.sh@12 -- # r1_mask=0x2 00:27:03.927 10:40:57 -- interrupt/interrupt_common.sh@13 -- # r2_mask=0x4 00:27:03.927 10:40:57 -- interrupt/interrupt_common.sh@15 -- # cpu_server_mask=0x07 00:27:03.927 10:40:57 -- interrupt/interrupt_common.sh@16 -- # rpc_server_addr=/var/tmp/spdk.sock 00:27:03.927 10:40:57 -- interrupt/reap_unregistered_poller.sh@14 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:27:03.927 10:40:57 -- interrupt/reap_unregistered_poller.sh@14 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:27:03.927 10:40:57 -- interrupt/reap_unregistered_poller.sh@17 -- # start_intr_tgt 00:27:03.927 10:40:57 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:03.927 10:40:57 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:27:03.927 10:40:57 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=137106 00:27:03.927 10:40:57 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:03.927 10:40:57 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:27:03.927 10:40:57 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 137106 /var/tmp/spdk.sock 00:27:03.927 10:40:57 -- common/autotest_common.sh@819 -- # '[' -z 137106 ']' 00:27:03.927 10:40:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:03.927 10:40:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:03.927 10:40:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:03.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:03.927 10:40:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:03.927 10:40:57 -- common/autotest_common.sh@10 -- # set +x 00:27:04.186 [2024-07-12 10:40:57.849321] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:27:04.187 [2024-07-12 10:40:57.849730] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137106 ] 00:27:04.187 [2024-07-12 10:40:58.026959] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:04.445 [2024-07-12 10:40:58.210502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:04.445 [2024-07-12 10:40:58.210648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:04.445 [2024-07-12 10:40:58.210643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:04.703 [2024-07-12 10:40:58.467406] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:04.961 10:40:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:04.961 10:40:58 -- common/autotest_common.sh@852 -- # return 0 00:27:04.961 10:40:58 -- interrupt/reap_unregistered_poller.sh@20 -- # jq -r '.threads[0]' 00:27:04.961 10:40:58 -- interrupt/reap_unregistered_poller.sh@20 -- # rpc_cmd thread_get_pollers 00:27:04.962 10:40:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:04.962 10:40:58 -- common/autotest_common.sh@10 -- # set +x 00:27:04.962 10:40:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:04.962 10:40:58 -- interrupt/reap_unregistered_poller.sh@20 -- # app_thread='{ 00:27:04.962 "name": "app_thread", 00:27:04.962 "id": 1, 00:27:04.962 "active_pollers": [], 00:27:04.962 "timed_pollers": [ 00:27:04.962 { 00:27:04.962 "name": "rpc_subsystem_poll", 00:27:04.962 "id": 1, 00:27:04.962 "state": "waiting", 00:27:04.962 "run_count": 0, 00:27:04.962 "busy_count": 0, 00:27:04.962 "period_ticks": 8800000 00:27:04.962 } 00:27:04.962 ], 00:27:04.962 "paused_pollers": [] 00:27:04.962 }' 00:27:04.962 10:40:58 -- interrupt/reap_unregistered_poller.sh@21 -- # jq -r '.active_pollers[].name' 00:27:05.226 10:40:58 -- interrupt/reap_unregistered_poller.sh@21 -- # native_pollers= 00:27:05.226 10:40:58 -- interrupt/reap_unregistered_poller.sh@22 -- # native_pollers+=' ' 00:27:05.226 10:40:58 -- interrupt/reap_unregistered_poller.sh@23 -- # jq -r '.timed_pollers[].name' 00:27:05.226 10:40:58 -- interrupt/reap_unregistered_poller.sh@23 -- # native_pollers+=rpc_subsystem_poll 00:27:05.226 10:40:58 -- interrupt/reap_unregistered_poller.sh@28 -- # setup_bdev_aio 00:27:05.226 10:40:58 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:27:05.226 10:40:58 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:27:05.226 10:40:58 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:27:05.226 5000+0 records in 00:27:05.226 5000+0 records out 00:27:05.226 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0194796 s, 526 MB/s 00:27:05.226 10:40:59 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:27:05.547 AIO0 00:27:05.547 10:40:59 -- interrupt/reap_unregistered_poller.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:27:05.805 10:40:59 -- interrupt/reap_unregistered_poller.sh@34 -- # sleep 0.1 00:27:05.805 10:40:59 -- interrupt/reap_unregistered_poller.sh@37 -- # rpc_cmd thread_get_pollers 00:27:05.805 10:40:59 -- interrupt/reap_unregistered_poller.sh@37 -- # jq -r 
'.threads[0]' 00:27:05.805 10:40:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:05.805 10:40:59 -- common/autotest_common.sh@10 -- # set +x 00:27:05.805 10:40:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:05.805 10:40:59 -- interrupt/reap_unregistered_poller.sh@37 -- # app_thread='{ 00:27:05.805 "name": "app_thread", 00:27:05.805 "id": 1, 00:27:05.805 "active_pollers": [], 00:27:05.805 "timed_pollers": [ 00:27:05.805 { 00:27:05.805 "name": "rpc_subsystem_poll", 00:27:05.805 "id": 1, 00:27:05.805 "state": "waiting", 00:27:05.805 "run_count": 0, 00:27:05.805 "busy_count": 0, 00:27:05.805 "period_ticks": 8800000 00:27:05.805 } 00:27:05.805 ], 00:27:05.805 "paused_pollers": [] 00:27:05.805 }' 00:27:05.805 10:40:59 -- interrupt/reap_unregistered_poller.sh@38 -- # jq -r '.active_pollers[].name' 00:27:06.064 10:40:59 -- interrupt/reap_unregistered_poller.sh@38 -- # remaining_pollers= 00:27:06.064 10:40:59 -- interrupt/reap_unregistered_poller.sh@39 -- # remaining_pollers+=' ' 00:27:06.064 10:40:59 -- interrupt/reap_unregistered_poller.sh@40 -- # jq -r '.timed_pollers[].name' 00:27:06.064 10:40:59 -- interrupt/reap_unregistered_poller.sh@40 -- # remaining_pollers+=rpc_subsystem_poll 00:27:06.064 10:40:59 -- interrupt/reap_unregistered_poller.sh@44 -- # [[ rpc_subsystem_poll == \ \r\p\c\_\s\u\b\s\y\s\t\e\m\_\p\o\l\l ]] 00:27:06.064 10:40:59 -- interrupt/reap_unregistered_poller.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:27:06.064 10:40:59 -- interrupt/reap_unregistered_poller.sh@47 -- # killprocess 137106 00:27:06.064 10:40:59 -- common/autotest_common.sh@926 -- # '[' -z 137106 ']' 00:27:06.064 10:40:59 -- common/autotest_common.sh@930 -- # kill -0 137106 00:27:06.064 10:40:59 -- common/autotest_common.sh@931 -- # uname 00:27:06.064 10:40:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:06.064 10:40:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 137106 00:27:06.064 10:40:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:06.064 10:40:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:06.064 10:40:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 137106' 00:27:06.064 killing process with pid 137106 00:27:06.064 10:40:59 -- common/autotest_common.sh@945 -- # kill 137106 00:27:06.064 10:40:59 -- common/autotest_common.sh@950 -- # wait 137106 00:27:06.998 10:41:00 -- interrupt/reap_unregistered_poller.sh@48 -- # cleanup 00:27:06.998 10:41:00 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:27:06.998 ************************************ 00:27:06.998 END TEST reap_unregistered_poller 00:27:06.998 ************************************ 00:27:06.998 00:27:06.998 real 0m3.258s 00:27:06.998 user 0m2.689s 00:27:06.998 sys 0m0.494s 00:27:06.999 10:41:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:06.999 10:41:00 -- common/autotest_common.sh@10 -- # set +x 00:27:06.999 10:41:00 -- spdk/autotest.sh@204 -- # uname -s 00:27:06.999 10:41:00 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:27:06.999 10:41:00 -- spdk/autotest.sh@205 -- # [[ 1 -eq 1 ]] 00:27:06.999 10:41:00 -- spdk/autotest.sh@211 -- # [[ 0 -eq 0 ]] 00:27:06.999 10:41:00 -- spdk/autotest.sh@212 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:27:06.999 10:41:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:06.999 10:41:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:06.999 10:41:00 -- 
common/autotest_common.sh@10 -- # set +x 00:27:06.999 ************************************ 00:27:06.999 START TEST spdk_dd 00:27:06.999 ************************************ 00:27:06.999 10:41:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:27:07.258 * Looking for test storage... 00:27:07.258 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:27:07.258 10:41:00 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:07.258 10:41:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:07.258 10:41:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:07.258 10:41:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:07.258 10:41:00 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:07.258 10:41:00 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:07.258 10:41:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:07.258 10:41:00 -- paths/export.sh@5 -- # export PATH 00:27:07.258 10:41:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:07.258 10:41:00 -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:27:07.516 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:27:07.516 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:27:08.451 10:41:02 -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:27:08.451 10:41:02 -- dd/dd.sh@11 -- # nvme_in_userspace 00:27:08.451 10:41:02 -- scripts/common.sh@311 -- # local bdf bdfs 00:27:08.451 10:41:02 -- scripts/common.sh@312 -- # local nvmes 00:27:08.451 10:41:02 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:27:08.451 10:41:02 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:27:08.451 10:41:02 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:27:08.451 10:41:02 -- scripts/common.sh@297 -- # local bdf= 00:27:08.451 10:41:02 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:27:08.451 10:41:02 -- scripts/common.sh@232 -- # local class 00:27:08.451 
10:41:02 -- scripts/common.sh@233 -- # local subclass 00:27:08.451 10:41:02 -- scripts/common.sh@234 -- # local progif 00:27:08.451 10:41:02 -- scripts/common.sh@235 -- # printf %02x 1 00:27:08.451 10:41:02 -- scripts/common.sh@235 -- # class=01 00:27:08.451 10:41:02 -- scripts/common.sh@236 -- # printf %02x 8 00:27:08.451 10:41:02 -- scripts/common.sh@236 -- # subclass=08 00:27:08.451 10:41:02 -- scripts/common.sh@237 -- # printf %02x 2 00:27:08.451 10:41:02 -- scripts/common.sh@237 -- # progif=02 00:27:08.451 10:41:02 -- scripts/common.sh@239 -- # hash lspci 00:27:08.451 10:41:02 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:27:08.451 10:41:02 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:27:08.451 10:41:02 -- scripts/common.sh@242 -- # grep -i -- -p02 00:27:08.451 10:41:02 -- scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:27:08.451 10:41:02 -- scripts/common.sh@244 -- # tr -d '"' 00:27:08.710 10:41:02 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:27:08.710 10:41:02 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:27:08.710 10:41:02 -- scripts/common.sh@15 -- # local i 00:27:08.710 10:41:02 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:27:08.710 10:41:02 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:27:08.710 10:41:02 -- scripts/common.sh@24 -- # return 0 00:27:08.710 10:41:02 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:27:08.710 10:41:02 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:27:08.710 10:41:02 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:27:08.710 10:41:02 -- scripts/common.sh@322 -- # uname -s 00:27:08.710 10:41:02 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:27:08.710 10:41:02 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:27:08.710 10:41:02 -- scripts/common.sh@327 -- # (( 1 )) 00:27:08.711 10:41:02 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 00:27:08.711 10:41:02 -- dd/dd.sh@13 -- # check_liburing 00:27:08.711 10:41:02 -- dd/common.sh@139 -- # local lib so 00:27:08.711 10:41:02 -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:27:08.711 10:41:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:27:08.711 10:41:02 -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:27:08.711 10:41:02 -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:08.711 10:41:02 -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:27:08.711 10:41:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:27:08.711 10:41:02 -- dd/common.sh@143 -- # [[ libasan.so.5 == liburing.so.* ]] 00:27:08.711 10:41:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:27:08.711 10:41:02 -- dd/common.sh@143 -- # [[ libnuma.so.1 == liburing.so.* ]] 00:27:08.711 10:41:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:27:08.711 10:41:02 -- dd/common.sh@143 -- # [[ libdl.so.2 == liburing.so.* ]] 00:27:08.711 10:41:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:27:08.711 10:41:02 -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]] 00:27:08.711 10:41:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:27:08.711 10:41:02 -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]] 00:27:08.711 10:41:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:27:08.711 10:41:02 -- dd/common.sh@143 -- # [[ librt.so.1 == liburing.so.* ]] 00:27:08.711 10:41:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:27:08.711 10:41:02 -- dd/common.sh@143 -- # [[ libuuid.so.1 == liburing.so.* ]] 00:27:08.711 10:41:02 -- dd/common.sh@142 -- 
# read -r lib _ so _ 00:27:08.711 10:41:02 -- dd/common.sh@143 -- # [[ libssl.so.1.1 == liburing.so.* ]] 00:27:08.711 10:41:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:27:08.711 10:41:02 -- dd/common.sh@143 -- # [[ libcrypto.so.1.1 == liburing.so.* ]] 00:27:08.711 10:41:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:27:08.711 10:41:02 -- dd/common.sh@143 -- # [[ libm.so.6 == liburing.so.* ]] 00:27:08.711 10:41:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:27:08.711 10:41:02 -- dd/common.sh@143 -- # [[ libfuse3.so.3 == liburing.so.* ]] 00:27:08.711 10:41:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:27:08.711 10:41:02 -- dd/common.sh@143 -- # [[ libaio.so.1 == liburing.so.* ]] 00:27:08.711 10:41:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:27:08.711 10:41:02 -- dd/common.sh@143 -- # [[ libiscsi.so.7 == liburing.so.* ]] 00:27:08.711 10:41:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:27:08.711 10:41:02 -- dd/common.sh@143 -- # [[ libubsan.so.1 == liburing.so.* ]] 00:27:08.711 10:41:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:27:08.711 10:41:02 -- dd/common.sh@143 -- # [[ libpthread.so.0 == liburing.so.* ]] 00:27:08.711 10:41:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:27:08.711 10:41:02 -- dd/common.sh@143 -- # [[ libc.so.6 == liburing.so.* ]] 00:27:08.711 10:41:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:27:08.711 10:41:02 -- dd/common.sh@143 -- # [[ libgcc_s.so.1 == liburing.so.* ]] 00:27:08.711 10:41:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:27:08.711 10:41:02 -- dd/common.sh@143 -- # [[ /lib64/ld-linux-x86-64.so.2 == liburing.so.* ]] 00:27:08.711 10:41:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:27:08.711 10:41:02 -- dd/common.sh@143 -- # [[ libnl-route-3.so.200 == liburing.so.* ]] 00:27:08.711 10:41:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:27:08.711 10:41:02 -- dd/common.sh@143 -- # [[ libnl-3.so.200 == liburing.so.* ]] 00:27:08.711 10:41:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:27:08.711 10:41:02 -- dd/common.sh@143 -- # [[ libstdc++.so.6 == liburing.so.* ]] 00:27:08.711 10:41:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:27:08.711 10:41:02 -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:27:08.711 10:41:02 -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 00:27:08.711 10:41:02 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:08.711 10:41:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:08.711 10:41:02 -- common/autotest_common.sh@10 -- # set +x 00:27:08.711 ************************************ 00:27:08.711 START TEST spdk_dd_basic_rw 00:27:08.711 ************************************ 00:27:08.711 10:41:02 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 00:27:08.711 * Looking for test storage... 
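The long run of [[ ... == liburing.so.* ]] comparisons above is check_liburing from dd/common.sh scanning the dynamic dependencies of the spdk_dd binary. A compact sketch of the same probe, with names and paths taken from the trace:

check_liburing() {
    local lib so
    local -g liburing_in_use=0
    # LD_TRACE_LOADED_OBJECTS=1 makes the dynamic loader print the shared
    # objects it would map ('name => path (addr)', one per line) instead of
    # executing the binary; that listing is what the loop above iterates.
    while read -r lib _ so _; do
        [[ $lib == liburing.so.* ]] && liburing_in_use=1
    done < <(LD_TRACE_LOADED_OBJECTS=1 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd)
}

In this run no liburing.so entry appears, so liburing_in_use stays 0 and the dd/dd.sh@15 guard (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) is false (SPDK_TEST_URING is not set in this configuration), which skips the uring-specific dd paths.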
00:27:08.711 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:27:08.711 10:41:02 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:08.711 10:41:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:08.711 10:41:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:08.711 10:41:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:08.711 10:41:02 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:08.711 10:41:02 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:08.711 10:41:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:08.711 10:41:02 -- paths/export.sh@5 -- # export PATH 00:27:08.711 10:41:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:08.711 10:41:02 -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:27:08.711 10:41:02 -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:27:08.711 10:41:02 -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:27:08.711 10:41:02 -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:06.0 00:27:08.711 10:41:02 -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:27:08.711 10:41:02 -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(["name"]=$nvme0 ["traddr"]=$nvme0_pci ["trtype"]=pcie) 00:27:08.711 10:41:02 -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:27:08.711 10:41:02 -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:08.711 10:41:02 -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:08.711 10:41:02 -- dd/basic_rw.sh@93 
-- # get_native_nvme_bs 0000:00:06.0 00:27:08.711 10:41:02 -- dd/common.sh@124 -- # local pci=0000:00:06.0 lbaf id 00:27:08.711 10:41:02 -- dd/common.sh@126 -- # mapfile -t id 00:27:08.711 10:41:02 -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:06.0' 00:27:08.972 10:41:02 -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects 
Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 109 Data Units Written: 7 Host Read Commands: 2292 Host Write Commands: 113 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 
Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:27:08.972 10:41:02 -- dd/common.sh@130 -- # lbaf=04 00:27:08.972 10:41:02 -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not 
Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change 
Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 109 Data Units Written: 7 Host Read Commands: 2292 Host Write Commands: 113 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:27:08.972 10:41:02 -- dd/common.sh@132 -- # lbaf=4096 00:27:08.972 10:41:02 -- dd/common.sh@134 -- # echo 4096 00:27:08.972 10:41:02 -- dd/basic_rw.sh@93 -- # native_bs=4096 00:27:08.972 10:41:02 -- dd/basic_rw.sh@96 -- # : 00:27:08.973 10:41:02 -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:27:08.973 10:41:02 -- dd/basic_rw.sh@96 -- # gen_conf 00:27:08.973 10:41:02 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:27:08.973 10:41:02 -- dd/common.sh@31 -- # xtrace_disable 00:27:08.973 10:41:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:08.973 10:41:02 -- common/autotest_common.sh@10 -- # set +x 00:27:08.973 10:41:02 -- common/autotest_common.sh@10 -- # set +x 00:27:08.973 ************************************ 00:27:08.973 START TEST dd_bs_lt_native_bs 
00:27:08.973 ************************************ 00:27:08.973 10:41:02 -- common/autotest_common.sh@1104 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:27:08.973 10:41:02 -- common/autotest_common.sh@640 -- # local es=0 00:27:08.973 10:41:02 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:27:08.973 10:41:02 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:08.973 10:41:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:08.973 10:41:02 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:08.973 10:41:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:08.973 10:41:02 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:08.973 10:41:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:08.973 10:41:02 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:08.973 10:41:02 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:08.973 10:41:02 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:27:08.973 { 00:27:08.973 "subsystems": [ 00:27:08.973 { 00:27:08.973 "subsystem": "bdev", 00:27:08.973 "config": [ 00:27:08.973 { 00:27:08.973 "params": { 00:27:08.973 "trtype": "pcie", 00:27:08.973 "traddr": "0000:00:06.0", 00:27:08.973 "name": "Nvme0" 00:27:08.973 }, 00:27:08.973 "method": "bdev_nvme_attach_controller" 00:27:08.973 }, 00:27:08.973 { 00:27:08.973 "method": "bdev_wait_for_examine" 00:27:08.973 } 00:27:08.973 ] 00:27:08.973 } 00:27:08.973 ] 00:27:08.973 } 00:27:08.973 [2024-07-12 10:41:02.860517] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
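The lbaf=04 and lbaf=4096 assignments a little above are get_native_nvme_bs at work: it captures the spdk_nvme_identify report into an array, extracts the current LBA format index with one regex, then extracts that format's data size with a second. A sketch of the same two-step extraction, with the regexes kept in variables to sidestep bash quoting rules (the binary is invoked by bare name here for brevity):

    pci=0000:00:06.0
    mapfile -t id < <(spdk_nvme_identify -r "trtype:pcie traddr:$pci")
    re='Current LBA Format: *LBA Format #([0-9]+)'
    [[ ${id[*]} =~ $re ]] && lbaf=${BASH_REMATCH[1]}          # "04" in this run
    re="LBA Format #${lbaf}: Data Size: *([0-9]+)"
    [[ ${id[*]} =~ $re ]] && native_bs=${BASH_REMATCH[1]}     # 4096
    echo "$native_bs"

With LBA format #04 active (4096-byte data, zero metadata), every test below treats 4096 as the native block size.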
00:27:08.973 [2024-07-12 10:41:02.860855] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137432 ] 00:27:09.232 [2024-07-12 10:41:03.034053] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:09.489 [2024-07-12 10:41:03.262482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:09.746 [2024-07-12 10:41:03.582876] spdk_dd.c:1145:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:27:09.746 [2024-07-12 10:41:03.583220] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:10.311 [2024-07-12 10:41:04.164244] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:27:10.877 ************************************ 00:27:10.877 END TEST dd_bs_lt_native_bs 00:27:10.877 ************************************ 00:27:10.877 10:41:04 -- common/autotest_common.sh@643 -- # es=234 00:27:10.877 10:41:04 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:10.877 10:41:04 -- common/autotest_common.sh@652 -- # es=106 00:27:10.877 10:41:04 -- common/autotest_common.sh@653 -- # case "$es" in 00:27:10.877 10:41:04 -- common/autotest_common.sh@660 -- # es=1 00:27:10.877 10:41:04 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:10.877 00:27:10.877 real 0m1.722s 00:27:10.877 user 0m1.426s 00:27:10.877 sys 0m0.259s 00:27:10.877 10:41:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:10.877 10:41:04 -- common/autotest_common.sh@10 -- # set +x 00:27:10.877 10:41:04 -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:27:10.877 10:41:04 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:10.877 10:41:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:10.877 10:41:04 -- common/autotest_common.sh@10 -- # set +x 00:27:10.877 ************************************ 00:27:10.877 START TEST dd_rw 00:27:10.877 ************************************ 00:27:10.877 10:41:04 -- common/autotest_common.sh@1104 -- # basic_rw 4096 00:27:10.877 10:41:04 -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:27:10.877 10:41:04 -- dd/basic_rw.sh@12 -- # local count size 00:27:10.877 10:41:04 -- dd/basic_rw.sh@13 -- # local qds bss 00:27:10.877 10:41:04 -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:27:10.877 10:41:04 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:27:10.877 10:41:04 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:27:10.877 10:41:04 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:27:10.877 10:41:04 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:27:10.877 10:41:04 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:27:10.877 10:41:04 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:27:10.877 10:41:04 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:27:10.877 10:41:04 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:27:10.877 10:41:04 -- dd/basic_rw.sh@23 -- # count=15 00:27:10.877 10:41:04 -- dd/basic_rw.sh@24 -- # count=15 00:27:10.877 10:41:04 -- dd/basic_rw.sh@25 -- # size=61440 00:27:10.877 10:41:04 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:27:10.877 10:41:04 -- dd/common.sh@98 -- # xtrace_disable 00:27:10.877 10:41:04 -- common/autotest_common.sh@10 -- # set +x 00:27:11.444 10:41:05 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 
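Two things happen in the lines above. First, the es=234 / es=106 / es=1 sequence is the NOT helper normalizing spdk_dd's expected failure: statuses above 128 have 128 subtracted (the killed-by-signal convention), and the remaining nonzero value collapses to 1, so dd_bs_lt_native_bs passes precisely because the undersized --bs was rejected. Second, dd_rw derives its block-size matrix by left-shifting the native block size. A sketch of that arithmetic:

    native_bs=4096
    bss=()
    for bs in {0..2}; do
        bss+=($(( native_bs << bs )))     # 4096, 8192, 16384
    done
    qds=(1 64)                            # each size runs at queue depth 1 and 64
    count=15
    size=$(( count * ${bss[0]} ))         # 15 * 4096 = 61440 bytes
    # later passes in this trace use count=7 (7*8192=57344) and count=3 (3*16384=49152)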
00:27:11.444 10:41:05 -- dd/basic_rw.sh@30 -- # gen_conf 00:27:11.444 10:41:05 -- dd/common.sh@31 -- # xtrace_disable 00:27:11.444 10:41:05 -- common/autotest_common.sh@10 -- # set +x 00:27:11.444 [2024-07-12 10:41:05.148762] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:11.444 [2024-07-12 10:41:05.149224] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137483 ] 00:27:11.444 { 00:27:11.444 "subsystems": [ 00:27:11.444 { 00:27:11.444 "subsystem": "bdev", 00:27:11.444 "config": [ 00:27:11.444 { 00:27:11.444 "params": { 00:27:11.444 "trtype": "pcie", 00:27:11.444 "traddr": "0000:00:06.0", 00:27:11.444 "name": "Nvme0" 00:27:11.444 }, 00:27:11.444 "method": "bdev_nvme_attach_controller" 00:27:11.444 }, 00:27:11.444 { 00:27:11.444 "method": "bdev_wait_for_examine" 00:27:11.444 } 00:27:11.444 ] 00:27:11.444 } 00:27:11.444 ] 00:27:11.444 } 00:27:11.444 [2024-07-12 10:41:05.300045] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:11.705 [2024-07-12 10:41:05.478995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:12.898  Copying: 60/60 [kB] (average 19 MBps) 00:27:12.898 00:27:12.898 10:41:06 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:27:12.898 10:41:06 -- dd/basic_rw.sh@37 -- # gen_conf 00:27:12.898 10:41:06 -- dd/common.sh@31 -- # xtrace_disable 00:27:12.898 10:41:06 -- common/autotest_common.sh@10 -- # set +x 00:27:12.898 [2024-07-12 10:41:06.745364] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
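The {"subsystems": ...} document repeated before every spdk_dd call is gen_conf's output, streamed over a /dev/fd descriptor so no temporary file is needed; it attaches the PCIe controller at 0000:00:06.0 as bdev Nvme0 and waits for examine to finish before the copy starts. Written to a regular file, the equivalent setup looks like this (conf.json is an illustrative name):

    cat > conf.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": { "trtype": "pcie", "traddr": "0000:00:06.0", "name": "Nvme0" },
              "method": "bdev_nvme_attach_controller"
            },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }
    EOF
    spdk_dd --if=dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json conf.json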
00:27:12.898 [2024-07-12 10:41:06.745750] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137518 ] 00:27:12.898 { 00:27:12.898 "subsystems": [ 00:27:12.898 { 00:27:12.898 "subsystem": "bdev", 00:27:12.898 "config": [ 00:27:12.898 { 00:27:12.898 "params": { 00:27:12.898 "trtype": "pcie", 00:27:12.898 "traddr": "0000:00:06.0", 00:27:12.898 "name": "Nvme0" 00:27:12.898 }, 00:27:12.898 "method": "bdev_nvme_attach_controller" 00:27:12.898 }, 00:27:12.898 { 00:27:12.898 "method": "bdev_wait_for_examine" 00:27:12.898 } 00:27:12.898 ] 00:27:12.898 } 00:27:12.898 ] 00:27:12.898 } 00:27:13.156 [2024-07-12 10:41:06.913396] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:13.414 [2024-07-12 10:41:07.074472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:14.608  Copying: 60/60 [kB] (average 19 MBps) 00:27:14.608 00:27:14.608 10:41:08 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:14.608 10:41:08 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:27:14.608 10:41:08 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:27:14.608 10:41:08 -- dd/common.sh@11 -- # local nvme_ref= 00:27:14.608 10:41:08 -- dd/common.sh@12 -- # local size=61440 00:27:14.608 10:41:08 -- dd/common.sh@14 -- # local bs=1048576 00:27:14.608 10:41:08 -- dd/common.sh@15 -- # local count=1 00:27:14.608 10:41:08 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:27:14.608 10:41:08 -- dd/common.sh@18 -- # gen_conf 00:27:14.608 10:41:08 -- dd/common.sh@31 -- # xtrace_disable 00:27:14.608 10:41:08 -- common/autotest_common.sh@10 -- # set +x 00:27:14.608 { 00:27:14.608 "subsystems": [ 00:27:14.608 { 00:27:14.608 "subsystem": "bdev", 00:27:14.608 "config": [ 00:27:14.608 { 00:27:14.608 "params": { 00:27:14.608 "trtype": "pcie", 00:27:14.608 "traddr": "0000:00:06.0", 00:27:14.608 "name": "Nvme0" 00:27:14.608 }, 00:27:14.608 "method": "bdev_nvme_attach_controller" 00:27:14.608 }, 00:27:14.608 { 00:27:14.608 "method": "bdev_wait_for_examine" 00:27:14.608 } 00:27:14.608 ] 00:27:14.608 } 00:27:14.608 ] 00:27:14.608 } 00:27:14.608 [2024-07-12 10:41:08.403110] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
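The diff -q in the middle of the block above is the actual assertion of basic_rw: each pass writes dd.dump0 into the bdev, reads the same number of blocks back into dd.dump1, and compares the two files byte for byte. Condensed, one pass is just:

    spdk_dd --if=dd.dump0 --ob=Nvme0n1  --bs=4096 --qd=1            --json conf.json  # write
    spdk_dd --ib=Nvme0n1  --of=dd.dump1 --bs=4096 --qd=1 --count=15 --json conf.json  # read back
    diff -q dd.dump0 dd.dump1    # exits 0 silently when the round trip preserved every byte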
00:27:14.608 [2024-07-12 10:41:08.403493] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137547 ] 00:27:14.866 [2024-07-12 10:41:08.570359] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:14.866 [2024-07-12 10:41:08.743163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:16.366  Copying: 1024/1024 [kB] (average 500 MBps) 00:27:16.366 00:27:16.366 10:41:09 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:27:16.366 10:41:09 -- dd/basic_rw.sh@23 -- # count=15 00:27:16.366 10:41:09 -- dd/basic_rw.sh@24 -- # count=15 00:27:16.366 10:41:09 -- dd/basic_rw.sh@25 -- # size=61440 00:27:16.366 10:41:09 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:27:16.366 10:41:09 -- dd/common.sh@98 -- # xtrace_disable 00:27:16.366 10:41:09 -- common/autotest_common.sh@10 -- # set +x 00:27:16.624 10:41:10 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:27:16.624 10:41:10 -- dd/basic_rw.sh@30 -- # gen_conf 00:27:16.624 10:41:10 -- dd/common.sh@31 -- # xtrace_disable 00:27:16.624 10:41:10 -- common/autotest_common.sh@10 -- # set +x 00:27:16.882 [2024-07-12 10:41:10.538599] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:16.882 [2024-07-12 10:41:10.538989] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137597 ] 00:27:16.882 { 00:27:16.882 "subsystems": [ 00:27:16.882 { 00:27:16.882 "subsystem": "bdev", 00:27:16.882 "config": [ 00:27:16.882 { 00:27:16.882 "params": { 00:27:16.882 "trtype": "pcie", 00:27:16.882 "traddr": "0000:00:06.0", 00:27:16.882 "name": "Nvme0" 00:27:16.882 }, 00:27:16.882 "method": "bdev_nvme_attach_controller" 00:27:16.882 }, 00:27:16.882 { 00:27:16.882 "method": "bdev_wait_for_examine" 00:27:16.882 } 00:27:16.882 ] 00:27:16.882 } 00:27:16.882 ] 00:27:16.882 } 00:27:16.882 [2024-07-12 10:41:10.706037] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:17.141 [2024-07-12 10:41:10.870193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:18.334  Copying: 60/60 [kB] (average 58 MBps) 00:27:18.334 00:27:18.334 10:41:12 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:27:18.334 10:41:12 -- dd/basic_rw.sh@37 -- # gen_conf 00:27:18.334 10:41:12 -- dd/common.sh@31 -- # xtrace_disable 00:27:18.334 10:41:12 -- common/autotest_common.sh@10 -- # set +x 00:27:18.334 { 00:27:18.334 "subsystems": [ 00:27:18.334 { 00:27:18.334 "subsystem": "bdev", 00:27:18.334 "config": [ 00:27:18.334 { 00:27:18.334 "params": { 00:27:18.334 "trtype": "pcie", 00:27:18.334 "traddr": "0000:00:06.0", 00:27:18.334 "name": "Nvme0" 00:27:18.334 }, 00:27:18.334 "method": "bdev_nvme_attach_controller" 00:27:18.334 }, 00:27:18.334 { 00:27:18.334 "method": "bdev_wait_for_examine" 00:27:18.334 } 00:27:18.334 ] 00:27:18.334 } 00:27:18.334 ] 00:27:18.334 } 00:27:18.334 [2024-07-12 10:41:12.202551] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
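Between passes, clear_nvme overwrites the start of the namespace with a single 1 MiB block of zeros (the bs=1048576, count=1 locals traced above), presumably so the next pass cannot read back stale data left by the previous one. The wipe reduces to:

    # clear_nvme, condensed: one 1 MiB zero block over the region under test
    spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json conf.json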
00:27:18.334 [2024-07-12 10:41:12.202894] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137618 ] 00:27:18.592 [2024-07-12 10:41:12.370292] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:18.850 [2024-07-12 10:41:12.541080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:20.041  Copying: 60/60 [kB] (average 58 MBps) 00:27:20.041 00:27:20.042 10:41:13 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:20.042 10:41:13 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:27:20.042 10:41:13 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:27:20.042 10:41:13 -- dd/common.sh@11 -- # local nvme_ref= 00:27:20.042 10:41:13 -- dd/common.sh@12 -- # local size=61440 00:27:20.042 10:41:13 -- dd/common.sh@14 -- # local bs=1048576 00:27:20.042 10:41:13 -- dd/common.sh@15 -- # local count=1 00:27:20.042 10:41:13 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:27:20.042 10:41:13 -- dd/common.sh@18 -- # gen_conf 00:27:20.042 10:41:13 -- dd/common.sh@31 -- # xtrace_disable 00:27:20.042 10:41:13 -- common/autotest_common.sh@10 -- # set +x 00:27:20.042 [2024-07-12 10:41:13.787559] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:20.042 [2024-07-12 10:41:13.788185] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137646 ] 00:27:20.042 { 00:27:20.042 "subsystems": [ 00:27:20.042 { 00:27:20.042 "subsystem": "bdev", 00:27:20.042 "config": [ 00:27:20.042 { 00:27:20.042 "params": { 00:27:20.042 "trtype": "pcie", 00:27:20.042 "traddr": "0000:00:06.0", 00:27:20.042 "name": "Nvme0" 00:27:20.042 }, 00:27:20.042 "method": "bdev_nvme_attach_controller" 00:27:20.042 }, 00:27:20.042 { 00:27:20.042 "method": "bdev_wait_for_examine" 00:27:20.042 } 00:27:20.042 ] 00:27:20.042 } 00:27:20.042 ] 00:27:20.042 } 00:27:20.042 [2024-07-12 10:41:13.942452] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:20.300 [2024-07-12 10:41:14.109421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:21.490  Copying: 1024/1024 [kB] (average 1000 MBps) 00:27:21.490 00:27:21.490 10:41:15 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:27:21.490 10:41:15 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:27:21.490 10:41:15 -- dd/basic_rw.sh@23 -- # count=7 00:27:21.490 10:41:15 -- dd/basic_rw.sh@24 -- # count=7 00:27:21.490 10:41:15 -- dd/basic_rw.sh@25 -- # size=57344 00:27:21.490 10:41:15 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:27:21.490 10:41:15 -- dd/common.sh@98 -- # xtrace_disable 00:27:21.490 10:41:15 -- common/autotest_common.sh@10 -- # set +x 00:27:22.057 10:41:15 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:27:22.057 10:41:15 -- dd/basic_rw.sh@30 -- # gen_conf 00:27:22.057 10:41:15 -- dd/common.sh@31 -- # xtrace_disable 00:27:22.057 10:41:15 -- common/autotest_common.sh@10 -- # set +x 00:27:22.057 [2024-07-12 10:41:15.938275] Starting SPDK 
v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:22.057 [2024-07-12 10:41:15.938672] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137678 ] 00:27:22.057 { 00:27:22.057 "subsystems": [ 00:27:22.057 { 00:27:22.057 "subsystem": "bdev", 00:27:22.057 "config": [ 00:27:22.057 { 00:27:22.057 "params": { 00:27:22.057 "trtype": "pcie", 00:27:22.057 "traddr": "0000:00:06.0", 00:27:22.057 "name": "Nvme0" 00:27:22.057 }, 00:27:22.057 "method": "bdev_nvme_attach_controller" 00:27:22.057 }, 00:27:22.057 { 00:27:22.057 "method": "bdev_wait_for_examine" 00:27:22.057 } 00:27:22.057 ] 00:27:22.057 } 00:27:22.057 ] 00:27:22.057 } 00:27:22.316 [2024-07-12 10:41:16.105924] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:22.574 [2024-07-12 10:41:16.273347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:23.766  Copying: 56/56 [kB] (average 54 MBps) 00:27:23.766 00:27:23.766 10:41:17 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:27:23.766 10:41:17 -- dd/basic_rw.sh@37 -- # gen_conf 00:27:23.766 10:41:17 -- dd/common.sh@31 -- # xtrace_disable 00:27:23.766 10:41:17 -- common/autotest_common.sh@10 -- # set +x 00:27:23.766 [2024-07-12 10:41:17.509116] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:23.766 [2024-07-12 10:41:17.509664] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137698 ] 00:27:23.766 { 00:27:23.766 "subsystems": [ 00:27:23.766 { 00:27:23.766 "subsystem": "bdev", 00:27:23.766 "config": [ 00:27:23.766 { 00:27:23.766 "params": { 00:27:23.766 "trtype": "pcie", 00:27:23.766 "traddr": "0000:00:06.0", 00:27:23.766 "name": "Nvme0" 00:27:23.766 }, 00:27:23.766 "method": "bdev_nvme_attach_controller" 00:27:23.766 }, 00:27:23.766 { 00:27:23.766 "method": "bdev_wait_for_examine" 00:27:23.766 } 00:27:23.766 ] 00:27:23.766 } 00:27:23.766 ] 00:27:23.766 } 00:27:23.766 [2024-07-12 10:41:17.666242] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:24.024 [2024-07-12 10:41:17.828103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:25.216  Copying: 56/56 [kB] (average 27 MBps) 00:27:25.216 00:27:25.216 10:41:19 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:25.216 10:41:19 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:27:25.216 10:41:19 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:27:25.216 10:41:19 -- dd/common.sh@11 -- # local nvme_ref= 00:27:25.216 10:41:19 -- dd/common.sh@12 -- # local size=57344 00:27:25.216 10:41:19 -- dd/common.sh@14 -- # local bs=1048576 00:27:25.216 10:41:19 -- dd/common.sh@15 -- # local count=1 00:27:25.216 10:41:19 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:27:25.216 10:41:19 -- dd/common.sh@18 -- # gen_conf 00:27:25.216 10:41:19 -- dd/common.sh@31 -- # xtrace_disable 00:27:25.216 10:41:19 -- common/autotest_common.sh@10 -- # set +x 00:27:25.474 { 
00:27:25.474 "subsystems": [ 00:27:25.475 { 00:27:25.475 "subsystem": "bdev", 00:27:25.475 "config": [ 00:27:25.475 { 00:27:25.475 "params": { 00:27:25.475 "trtype": "pcie", 00:27:25.475 "traddr": "0000:00:06.0", 00:27:25.475 "name": "Nvme0" 00:27:25.475 }, 00:27:25.475 "method": "bdev_nvme_attach_controller" 00:27:25.475 }, 00:27:25.475 { 00:27:25.475 "method": "bdev_wait_for_examine" 00:27:25.475 } 00:27:25.475 ] 00:27:25.475 } 00:27:25.475 ] 00:27:25.475 } 00:27:25.475 [2024-07-12 10:41:19.173064] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:25.475 [2024-07-12 10:41:19.173403] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137748 ] 00:27:25.475 [2024-07-12 10:41:19.341636] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:25.733 [2024-07-12 10:41:19.512026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:26.962  Copying: 1024/1024 [kB] (average 500 MBps) 00:27:26.962 00:27:26.962 10:41:20 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:27:26.962 10:41:20 -- dd/basic_rw.sh@23 -- # count=7 00:27:26.962 10:41:20 -- dd/basic_rw.sh@24 -- # count=7 00:27:26.962 10:41:20 -- dd/basic_rw.sh@25 -- # size=57344 00:27:26.962 10:41:20 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:27:26.962 10:41:20 -- dd/common.sh@98 -- # xtrace_disable 00:27:26.962 10:41:20 -- common/autotest_common.sh@10 -- # set +x 00:27:27.526 10:41:21 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:27:27.526 10:41:21 -- dd/basic_rw.sh@30 -- # gen_conf 00:27:27.526 10:41:21 -- dd/common.sh@31 -- # xtrace_disable 00:27:27.526 10:41:21 -- common/autotest_common.sh@10 -- # set +x 00:27:27.526 { 00:27:27.526 "subsystems": [ 00:27:27.526 { 00:27:27.526 "subsystem": "bdev", 00:27:27.526 "config": [ 00:27:27.526 { 00:27:27.526 "params": { 00:27:27.526 "trtype": "pcie", 00:27:27.526 "traddr": "0000:00:06.0", 00:27:27.526 "name": "Nvme0" 00:27:27.526 }, 00:27:27.526 "method": "bdev_nvme_attach_controller" 00:27:27.526 }, 00:27:27.526 { 00:27:27.526 "method": "bdev_wait_for_examine" 00:27:27.526 } 00:27:27.526 ] 00:27:27.526 } 00:27:27.526 ] 00:27:27.526 } 00:27:27.526 [2024-07-12 10:41:21.286431] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:27:27.526 [2024-07-12 10:41:21.287554] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137775 ] 00:27:27.784 [2024-07-12 10:41:21.453017] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:27.784 [2024-07-12 10:41:21.612098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:28.976  Copying: 56/56 [kB] (average 54 MBps) 00:27:28.976 00:27:28.976 10:41:22 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:27:28.976 10:41:22 -- dd/basic_rw.sh@37 -- # gen_conf 00:27:28.976 10:41:22 -- dd/common.sh@31 -- # xtrace_disable 00:27:28.976 10:41:22 -- common/autotest_common.sh@10 -- # set +x 00:27:29.234 [2024-07-12 10:41:22.938921] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:29.234 [2024-07-12 10:41:22.939375] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137807 ] 00:27:29.234 { 00:27:29.234 "subsystems": [ 00:27:29.234 { 00:27:29.234 "subsystem": "bdev", 00:27:29.234 "config": [ 00:27:29.234 { 00:27:29.234 "params": { 00:27:29.234 "trtype": "pcie", 00:27:29.234 "traddr": "0000:00:06.0", 00:27:29.234 "name": "Nvme0" 00:27:29.234 }, 00:27:29.234 "method": "bdev_nvme_attach_controller" 00:27:29.234 }, 00:27:29.234 { 00:27:29.234 "method": "bdev_wait_for_examine" 00:27:29.234 } 00:27:29.234 ] 00:27:29.234 } 00:27:29.234 ] 00:27:29.234 } 00:27:29.234 [2024-07-12 10:41:23.106571] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:29.492 [2024-07-12 10:41:23.275229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:30.684  Copying: 56/56 [kB] (average 54 MBps) 00:27:30.684 00:27:30.684 10:41:24 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:30.684 10:41:24 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:27:30.684 10:41:24 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:27:30.684 10:41:24 -- dd/common.sh@11 -- # local nvme_ref= 00:27:30.684 10:41:24 -- dd/common.sh@12 -- # local size=57344 00:27:30.684 10:41:24 -- dd/common.sh@14 -- # local bs=1048576 00:27:30.684 10:41:24 -- dd/common.sh@15 -- # local count=1 00:27:30.684 10:41:24 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:27:30.684 10:41:24 -- dd/common.sh@18 -- # gen_conf 00:27:30.684 10:41:24 -- dd/common.sh@31 -- # xtrace_disable 00:27:30.684 10:41:24 -- common/autotest_common.sh@10 -- # set +x 00:27:30.684 [2024-07-12 10:41:24.533978] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:27:30.684 [2024-07-12 10:41:24.534387] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137828 ] 00:27:30.684 { 00:27:30.684 "subsystems": [ 00:27:30.684 { 00:27:30.684 "subsystem": "bdev", 00:27:30.684 "config": [ 00:27:30.684 { 00:27:30.684 "params": { 00:27:30.684 "trtype": "pcie", 00:27:30.684 "traddr": "0000:00:06.0", 00:27:30.684 "name": "Nvme0" 00:27:30.684 }, 00:27:30.684 "method": "bdev_nvme_attach_controller" 00:27:30.684 }, 00:27:30.684 { 00:27:30.684 "method": "bdev_wait_for_examine" 00:27:30.684 } 00:27:30.684 ] 00:27:30.684 } 00:27:30.684 ] 00:27:30.684 } 00:27:30.942 [2024-07-12 10:41:24.702616] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:31.201 [2024-07-12 10:41:24.872207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:32.433  Copying: 1024/1024 [kB] (average 1000 MBps) 00:27:32.433 00:27:32.433 10:41:26 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:27:32.433 10:41:26 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:27:32.433 10:41:26 -- dd/basic_rw.sh@23 -- # count=3 00:27:32.433 10:41:26 -- dd/basic_rw.sh@24 -- # count=3 00:27:32.433 10:41:26 -- dd/basic_rw.sh@25 -- # size=49152 00:27:32.433 10:41:26 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:27:32.433 10:41:26 -- dd/common.sh@98 -- # xtrace_disable 00:27:32.433 10:41:26 -- common/autotest_common.sh@10 -- # set +x 00:27:32.691 10:41:26 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:27:32.691 10:41:26 -- dd/basic_rw.sh@30 -- # gen_conf 00:27:32.691 10:41:26 -- dd/common.sh@31 -- # xtrace_disable 00:27:32.691 10:41:26 -- common/autotest_common.sh@10 -- # set +x 00:27:32.949 [2024-07-12 10:41:26.627970] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:27:32.949 [2024-07-12 10:41:26.628501] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137860 ] 00:27:32.949 { 00:27:32.949 "subsystems": [ 00:27:32.949 { 00:27:32.949 "subsystem": "bdev", 00:27:32.949 "config": [ 00:27:32.949 { 00:27:32.949 "params": { 00:27:32.949 "trtype": "pcie", 00:27:32.949 "traddr": "0000:00:06.0", 00:27:32.949 "name": "Nvme0" 00:27:32.949 }, 00:27:32.949 "method": "bdev_nvme_attach_controller" 00:27:32.949 }, 00:27:32.949 { 00:27:32.949 "method": "bdev_wait_for_examine" 00:27:32.949 } 00:27:32.949 ] 00:27:32.949 } 00:27:32.949 ] 00:27:32.949 } 00:27:32.949 [2024-07-12 10:41:26.782502] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:33.207 [2024-07-12 10:41:26.941570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:34.401  Copying: 48/48 [kB] (average 46 MBps) 00:27:34.401 00:27:34.401 10:41:28 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:27:34.401 10:41:28 -- dd/basic_rw.sh@37 -- # gen_conf 00:27:34.401 10:41:28 -- dd/common.sh@31 -- # xtrace_disable 00:27:34.401 10:41:28 -- common/autotest_common.sh@10 -- # set +x 00:27:34.401 [2024-07-12 10:41:28.187027] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:34.401 [2024-07-12 10:41:28.187466] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137887 ] 00:27:34.401 { 00:27:34.401 "subsystems": [ 00:27:34.401 { 00:27:34.401 "subsystem": "bdev", 00:27:34.401 "config": [ 00:27:34.401 { 00:27:34.401 "params": { 00:27:34.401 "trtype": "pcie", 00:27:34.401 "traddr": "0000:00:06.0", 00:27:34.401 "name": "Nvme0" 00:27:34.401 }, 00:27:34.401 "method": "bdev_nvme_attach_controller" 00:27:34.401 }, 00:27:34.401 { 00:27:34.401 "method": "bdev_wait_for_examine" 00:27:34.401 } 00:27:34.401 ] 00:27:34.401 } 00:27:34.401 ] 00:27:34.401 } 00:27:34.659 [2024-07-12 10:41:28.354264] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:34.660 [2024-07-12 10:41:28.526186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:36.162  Copying: 48/48 [kB] (average 46 MBps) 00:27:36.162 00:27:36.162 10:41:29 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:36.162 10:41:29 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:27:36.162 10:41:29 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:27:36.162 10:41:29 -- dd/common.sh@11 -- # local nvme_ref= 00:27:36.162 10:41:29 -- dd/common.sh@12 -- # local size=49152 00:27:36.162 10:41:29 -- dd/common.sh@14 -- # local bs=1048576 00:27:36.162 10:41:29 -- dd/common.sh@15 -- # local count=1 00:27:36.162 10:41:29 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:27:36.162 10:41:29 -- dd/common.sh@18 -- # gen_conf 00:27:36.162 10:41:29 -- dd/common.sh@31 -- # xtrace_disable 00:27:36.162 10:41:29 -- common/autotest_common.sh@10 -- # set +x 00:27:36.162 [2024-07-12 10:41:29.875710] Starting SPDK v24.01.1-pre git sha1 
4b94202c6 / DPDK 23.11.0 initialization... 00:27:36.162 [2024-07-12 10:41:29.876064] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137935 ] 00:27:36.162 { 00:27:36.162 "subsystems": [ 00:27:36.162 { 00:27:36.162 "subsystem": "bdev", 00:27:36.162 "config": [ 00:27:36.162 { 00:27:36.162 "params": { 00:27:36.162 "trtype": "pcie", 00:27:36.162 "traddr": "0000:00:06.0", 00:27:36.162 "name": "Nvme0" 00:27:36.162 }, 00:27:36.162 "method": "bdev_nvme_attach_controller" 00:27:36.162 }, 00:27:36.162 { 00:27:36.162 "method": "bdev_wait_for_examine" 00:27:36.162 } 00:27:36.162 ] 00:27:36.162 } 00:27:36.162 ] 00:27:36.162 } 00:27:36.162 [2024-07-12 10:41:30.045263] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:36.421 [2024-07-12 10:41:30.202841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:37.616  Copying: 1024/1024 [kB] (average 1000 MBps) 00:27:37.616 00:27:37.616 10:41:31 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:27:37.616 10:41:31 -- dd/basic_rw.sh@23 -- # count=3 00:27:37.616 10:41:31 -- dd/basic_rw.sh@24 -- # count=3 00:27:37.616 10:41:31 -- dd/basic_rw.sh@25 -- # size=49152 00:27:37.616 10:41:31 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:27:37.616 10:41:31 -- dd/common.sh@98 -- # xtrace_disable 00:27:37.616 10:41:31 -- common/autotest_common.sh@10 -- # set +x 00:27:38.183 10:41:31 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:27:38.183 10:41:31 -- dd/basic_rw.sh@30 -- # gen_conf 00:27:38.183 10:41:31 -- dd/common.sh@31 -- # xtrace_disable 00:27:38.183 10:41:31 -- common/autotest_common.sh@10 -- # set +x 00:27:38.183 [2024-07-12 10:41:31.888814] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:27:38.183 [2024-07-12 10:41:31.889360] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137963 ] 00:27:38.183 { 00:27:38.183 "subsystems": [ 00:27:38.183 { 00:27:38.183 "subsystem": "bdev", 00:27:38.183 "config": [ 00:27:38.183 { 00:27:38.183 "params": { 00:27:38.183 "trtype": "pcie", 00:27:38.183 "traddr": "0000:00:06.0", 00:27:38.183 "name": "Nvme0" 00:27:38.183 }, 00:27:38.183 "method": "bdev_nvme_attach_controller" 00:27:38.183 }, 00:27:38.183 { 00:27:38.183 "method": "bdev_wait_for_examine" 00:27:38.183 } 00:27:38.183 ] 00:27:38.183 } 00:27:38.183 ] 00:27:38.183 } 00:27:38.183 [2024-07-12 10:41:32.042920] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:38.441 [2024-07-12 10:41:32.209140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:39.633  Copying: 48/48 [kB] (average 46 MBps) 00:27:39.633 00:27:39.633 10:41:33 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:27:39.633 10:41:33 -- dd/basic_rw.sh@37 -- # gen_conf 00:27:39.633 10:41:33 -- dd/common.sh@31 -- # xtrace_disable 00:27:39.633 10:41:33 -- common/autotest_common.sh@10 -- # set +x 00:27:39.633 [2024-07-12 10:41:33.530766] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:39.633 [2024-07-12 10:41:33.531111] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137987 ] 00:27:39.633 { 00:27:39.633 "subsystems": [ 00:27:39.633 { 00:27:39.633 "subsystem": "bdev", 00:27:39.633 "config": [ 00:27:39.633 { 00:27:39.633 "params": { 00:27:39.633 "trtype": "pcie", 00:27:39.633 "traddr": "0000:00:06.0", 00:27:39.633 "name": "Nvme0" 00:27:39.633 }, 00:27:39.633 "method": "bdev_nvme_attach_controller" 00:27:39.633 }, 00:27:39.633 { 00:27:39.633 "method": "bdev_wait_for_examine" 00:27:39.633 } 00:27:39.633 ] 00:27:39.633 } 00:27:39.633 ] 00:27:39.633 } 00:27:39.891 [2024-07-12 10:41:33.697625] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:40.150 [2024-07-12 10:41:33.863173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:41.342  Copying: 48/48 [kB] (average 46 MBps) 00:27:41.342 00:27:41.342 10:41:35 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:41.342 10:41:35 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:27:41.342 10:41:35 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:27:41.342 10:41:35 -- dd/common.sh@11 -- # local nvme_ref= 00:27:41.342 10:41:35 -- dd/common.sh@12 -- # local size=49152 00:27:41.342 10:41:35 -- dd/common.sh@14 -- # local bs=1048576 00:27:41.342 10:41:35 -- dd/common.sh@15 -- # local count=1 00:27:41.342 10:41:35 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:27:41.342 10:41:35 -- dd/common.sh@18 -- # gen_conf 00:27:41.342 10:41:35 -- dd/common.sh@31 -- # xtrace_disable 00:27:41.342 10:41:35 -- common/autotest_common.sh@10 -- # set +x 00:27:41.342 { 00:27:41.342 "subsystems": [ 00:27:41.342 { 00:27:41.342 
"subsystem": "bdev", 00:27:41.342 "config": [ 00:27:41.342 { 00:27:41.342 "params": { 00:27:41.342 "trtype": "pcie", 00:27:41.342 "traddr": "0000:00:06.0", 00:27:41.342 "name": "Nvme0" 00:27:41.342 }, 00:27:41.342 "method": "bdev_nvme_attach_controller" 00:27:41.342 }, 00:27:41.342 { 00:27:41.342 "method": "bdev_wait_for_examine" 00:27:41.342 } 00:27:41.342 ] 00:27:41.342 } 00:27:41.342 ] 00:27:41.342 } 00:27:41.342 [2024-07-12 10:41:35.112528] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:41.342 [2024-07-12 10:41:35.112850] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138016 ] 00:27:41.600 [2024-07-12 10:41:35.278078] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:41.600 [2024-07-12 10:41:35.442313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:42.794  Copying: 1024/1024 [kB] (average 1000 MBps) 00:27:42.794 00:27:43.053 ************************************ 00:27:43.053 END TEST dd_rw 00:27:43.053 ************************************ 00:27:43.053 00:27:43.053 real 0m32.165s 00:27:43.053 user 0m26.464s 00:27:43.053 sys 0m4.448s 00:27:43.053 10:41:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:43.053 10:41:36 -- common/autotest_common.sh@10 -- # set +x 00:27:43.053 10:41:36 -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:27:43.053 10:41:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:43.053 10:41:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:43.053 10:41:36 -- common/autotest_common.sh@10 -- # set +x 00:27:43.053 ************************************ 00:27:43.053 START TEST dd_rw_offset 00:27:43.053 ************************************ 00:27:43.053 10:41:36 -- common/autotest_common.sh@1104 -- # basic_offset 00:27:43.053 10:41:36 -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:27:43.053 10:41:36 -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:27:43.053 10:41:36 -- dd/common.sh@98 -- # xtrace_disable 00:27:43.053 10:41:36 -- common/autotest_common.sh@10 -- # set +x 00:27:43.053 10:41:36 -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:27:43.054 10:41:36 -- dd/basic_rw.sh@56 -- # 
data=dabw22znio5iyyrrrof2mx0r72cxrhdesrkzpx224rzr77e90ukzfsyzmhok35aia8zvw3yef0q1416ho1ae5slszzpoqh4ioctybw1h7pfl7s8rlq97q2hzj32xd19133n3zzxdzez9m45lzpi2fgra19t3uaixilubtp06q52gnd89wf7ugpg72b069m3acu8188obhfw4ky5zk2d0yq8lu3frd9gruv9a7ygn2zdepitbox4r8rr7b66ra4wgxr5lg4g8dp2gebsxoo9q0mljoe1v7ib5a7ji4ah8r7j8rg473tmu70cnb7a7zgzft6n032a2onv2n4alhdljax87l8c89ibdykdcpytvynk6oup7vr6ieghbgy8154cdnxgcrowlfvxliqsl9oh8xurcnadgrdwr9fytriaby11ozyk9mwkjwe5r3acq1eqpn7rdqscclimede68678z9ai9d05kqrkd0cvzomi9jd8mnj77f9ubx74f45enztyhpe8z8h84nlcucw1wgurbka6pflshk4f98mcnzx3c3czv7jpvnoajv1mdhhhbb72kidg3nnwywyxatlkhfujxk02smcav2cpp9xwbxskjwblcfekrdt3ugeqph1jnznnhl5nyyrmld5wtn97fvtqusm1el4r7rosinrcv2k6ehmkqb69svc2joth31dnkviffwo654x7zlt05x2jl3t1d478hk5basc4xv4bnsack6b1z0j4mj260skl6htr8qgyrbjwr8vb3azz6d645d1lhtymzwxys0j88cdo4xg2g2dcjssn3eqkxa42nn4fxwc7ufu43uqb3khrnvnicpx7pdvg8hmyj0dwfkjx3p343j2dnbj98dws0ea8t7jy0zslmuqrxxi8ifaajrcjis7z0h5d0t5cwimi2jd103yf951ujbjmy7q0fbjbmha1aignjyepc9rfnf3bj2ack4iznz4k1zs12n7w4la58aewz9awg874eali1mq1mf37l0s1rm4ia07htjoz0osixinmowhj8x0hbmw5k684jv5xx8e0as27enfcc4u67is1psd9xvvac6f7trmuodw9h0l8lu3mbeecusmr0ob1lhiajtb0k4angus2lrbk029coqs8ij0pd59izbgvs9i9qrlb45q007r7u7pml7dqix3vvu4ol4t1ipu1l9rctxllhddcnkmcvpm2p9rwxsq4nu12ge79ecur2be1hfrgqfu4mro3uucnidoiyg5lftch1vk62gc1nvp58sdsunm9nnpo5p9xy06j1pipla6ercz1849v9s050oixbb25a7b2zoho0d1flc14cfo1mxjgfrxt23emj2cxuoscxr7hqehcjr47isu38hwz1nns9aaguy7o8a02wrq3tvhyjn79eq71148ygbqofjz6iltqvxtqgolf2cwxuppluqakb5jie7bmvsbysw7csuaay1lkmoqg2ejaps9kmpyuxd8x3m33upqub9hhfi96zt1c762luis1inhp69potbqxmc833quienqdmmgzbuvt1b417wbfnqnp6i68fqtgebxci74isysosdscc8czctmgxyhuhaxdvnzirl0jlr06foydvi645sbaz7g86w43o5jolqe2i3ii09b3kuslqufx8nh1lfy639qauozo1xfpuzitlm2y3bhbw09gjg5qef4org8cj0s1nudo9l966wq7n4rddqxygm78i7xq4emsipaiowm63ul6vurrxy7boac0qzv4d6i30d0mrubvt5eth3a6zrn3gyfmmpjefj9ftu3gtb6biap7oyz0jx8deztc2okm368dgbzype49delfuppwkzve6wlbgngnccmc8h9cq9gncc4pra5nxjmu7qxbmrtne0f0g56wby0tkl66sxoiu803md425p5wuohlg93ardtju1q9b81sv5m9usdogi8msq8i4c8zsql7d4adhttcifwu58ps507wa5vhbjlex5mjxj6z7zherrs85mas10nui9coxh6reqgeekhc92gdddg49huu2i2emx5cv9i7dhdvuqduzzomxs8ghgzxuff1b9ph467thaamk96hpjgrsig2fpaws8xagn5wf1wu16ofbitrxkm4c1jrg4rvr91lw64bx1wgrhg0l1yn1ziodjmczwf88thd5j5eqeymmkry2f82g90a207w1ivzpffqgi6qono8hpj7bv1zpp2n0wykpv0jyeiacdjwdetcuhp85xlyyuqq32qaukzykip5s9uw9qusehbno59u5teau4y73udixs4mpfnk97fjsm8xmlw9n04gpr7z24fmdqc6kdh325iapbocydq5ms1f4pchxers9hht0bghqt4u1s8fpnph55hm2576poulac5rp0pv5j6m8vh66j8gprqda99utxon7j4wbyfj53oow27blwaah4vvgnjp6pneniiqmumojvcfvapnlr0g0bmh6wo3lmk8lttpyrbpjqexmmwh5aenr86gbpumqwvqdknif2skkm5ctoku84x9amrebfc25je17o69f4sft2tsr3mw4qhu4ko9x59u2qwm12g1h55x6azjlokpjun813pi271iwewy8pdwmxgqgohrjrpwsl5el4ir8k4ehohog034cf6kso3a0yjh2ymrgh90dbsiz4doupou34kts1z9838x5ih19dv3osm5ipe9wab2j5yzw10x71h3oquwbh896wukljcpkrgku5346a17g206gzlej8zyamq1o7hqzijfbfxt1vc8tjnbvknky7uo5grveytfkghnew2igkh9r1t1r0nemv9bn9076xwxghm4fxfvtk69ybrjbcbmgjbadbay0mxzikhun1hrkb8r6q0k78i7e01helkf747ziopoo3gojcvruqlqnhsgxbh6uxcx87z80m1137u6x4u8qvym789zhsqa56wluven7lr24g88ect0j59swrl523zso7bxnexnmwcx8bt21dgl2sidtgu6eich6yfcq0wr9gt2ldq3nzq6im9ekgzwu6nuooxe0ssjbt9z9gu9bybfsjelqp9to1omnyowj57orbgxi3a7dmw295h3tztumq9kelq3lfr6gs9yu041bxs0xtdpzontsde3tpnpb6u7dj8xm5x0sr13w7tx7obpxyrz2k5ar6tpxxfw6vzdgik7js1fmupcn3jnyu6c9f8nxhsbby84bsbm2zgr014b18swq8m76yos4zufu3xnrcxstmmsvllkeqh6w8m9piuyw9yz1riqtq4htfult2xne2y5umn5gp431eqhj67amg72piekbtmlw6p9sdvf2b1wum8ou4p4uiqfv2w3ywdcv3m0gbmr5pr7hf8ec0c2h74ff98mj99u5ap9172oz7fzxbyu4p65ra9zn7jkj8kq33any8caszvmphzq5z98i4sx403joc03tx9q7478gi8puvbnhmfwjqus19xlmqhlwcaakyld3wvk5y8ct8a
k0bpzhsin6ajdqhee4fdr8oamex1rfviujcuqmbvx2kque087l4l0s8g72rxxe7ee46cruxs7ksccghymhu6ukbdxf8zs92auurte1inlinauky52impxka8b9vcznyztqwb10a99qf8nj6vexo71xnim4ph2ay8j58oow4cnym2jgpw9g1jnbv9avoirich02idmfia289ddeplj8z3ef5a88zv0b6s3xorweoihb2nudkdb07unm56ga9hf43s4qylw7xf0lk5keqz7ath7bx7kvv88kxvrltkad0mzsgg9zcs3st81nxunxl8d9n5w1vaxomcohmk20esqmcgohknz3ypqyxjo9p1v9xotopdzljjev9ccwtkub91fqbcxc1ifdux87nbxl89hmdbn2q3a7qqogi2mz08gkrb3jrom9wgx0dcs3w5zmjazs8n7tkjtxn9chi54zymdetjfww2dupki1svq1ikwvzpzdaxekasm0o849emg5qv6k3i0ytp3kkbnyp2ocx5xwrcfqngcxq5pibmxs 00:27:43.054 10:41:36 -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:27:43.054 10:41:36 -- dd/basic_rw.sh@59 -- # gen_conf 00:27:43.054 10:41:36 -- dd/common.sh@31 -- # xtrace_disable 00:27:43.054 10:41:36 -- common/autotest_common.sh@10 -- # set +x 00:27:43.054 [2024-07-12 10:41:36.898565] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:43.054 [2024-07-12 10:41:36.898952] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138065 ] 00:27:43.054 { 00:27:43.054 "subsystems": [ 00:27:43.054 { 00:27:43.054 "subsystem": "bdev", 00:27:43.054 "config": [ 00:27:43.054 { 00:27:43.054 "params": { 00:27:43.054 "trtype": "pcie", 00:27:43.054 "traddr": "0000:00:06.0", 00:27:43.054 "name": "Nvme0" 00:27:43.054 }, 00:27:43.054 "method": "bdev_nvme_attach_controller" 00:27:43.054 }, 00:27:43.054 { 00:27:43.054 "method": "bdev_wait_for_examine" 00:27:43.054 } 00:27:43.054 ] 00:27:43.054 } 00:27:43.054 ] 00:27:43.054 } 00:27:43.313 [2024-07-12 10:41:37.067222] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:43.571 [2024-07-12 10:41:37.228992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:44.764  Copying: 4096/4096 [B] (average 4000 kBps) 00:27:44.764 00:27:44.764 10:41:38 -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:27:44.764 10:41:38 -- dd/basic_rw.sh@65 -- # gen_conf 00:27:44.764 10:41:38 -- dd/common.sh@31 -- # xtrace_disable 00:27:44.764 10:41:38 -- common/autotest_common.sh@10 -- # set +x 00:27:44.764 [2024-07-12 10:41:38.475098] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:27:44.764 [2024-07-12 10:41:38.475470] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138091 ] 00:27:44.764 { 00:27:44.764 "subsystems": [ 00:27:44.764 { 00:27:44.764 "subsystem": "bdev", 00:27:44.764 "config": [ 00:27:44.764 { 00:27:44.764 "params": { 00:27:44.764 "trtype": "pcie", 00:27:44.764 "traddr": "0000:00:06.0", 00:27:44.764 "name": "Nvme0" 00:27:44.764 }, 00:27:44.764 "method": "bdev_nvme_attach_controller" 00:27:44.764 }, 00:27:44.764 { 00:27:44.764 "method": "bdev_wait_for_examine" 00:27:44.764 } 00:27:44.764 ] 00:27:44.764 } 00:27:44.764 ] 00:27:44.764 } 00:27:44.764 [2024-07-12 10:41:38.641338] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:45.022 [2024-07-12 10:41:38.799267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:46.215  Copying: 4096/4096 [B] (average 4000 kBps) 00:27:46.215 00:27:46.215 10:41:40 -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:27:46.216 10:41:40 -- dd/basic_rw.sh@72 -- # [[ dabw22znio5iyyrrrof2mx0r72cxrhdesrkzpx224rzr77e90ukzfsyzmhok35aia8zvw3yef0q1416ho1ae5slszzpoqh4ioctybw1h7pfl7s8rlq97q2hzj32xd19133n3zzxdzez9m45lzpi2fgra19t3uaixilubtp06q52gnd89wf7ugpg72b069m3acu8188obhfw4ky5zk2d0yq8lu3frd9gruv9a7ygn2zdepitbox4r8rr7b66ra4wgxr5lg4g8dp2gebsxoo9q0mljoe1v7ib5a7ji4ah8r7j8rg473tmu70cnb7a7zgzft6n032a2onv2n4alhdljax87l8c89ibdykdcpytvynk6oup7vr6ieghbgy8154cdnxgcrowlfvxliqsl9oh8xurcnadgrdwr9fytriaby11ozyk9mwkjwe5r3acq1eqpn7rdqscclimede68678z9ai9d05kqrkd0cvzomi9jd8mnj77f9ubx74f45enztyhpe8z8h84nlcucw1wgurbka6pflshk4f98mcnzx3c3czv7jpvnoajv1mdhhhbb72kidg3nnwywyxatlkhfujxk02smcav2cpp9xwbxskjwblcfekrdt3ugeqph1jnznnhl5nyyrmld5wtn97fvtqusm1el4r7rosinrcv2k6ehmkqb69svc2joth31dnkviffwo654x7zlt05x2jl3t1d478hk5basc4xv4bnsack6b1z0j4mj260skl6htr8qgyrbjwr8vb3azz6d645d1lhtymzwxys0j88cdo4xg2g2dcjssn3eqkxa42nn4fxwc7ufu43uqb3khrnvnicpx7pdvg8hmyj0dwfkjx3p343j2dnbj98dws0ea8t7jy0zslmuqrxxi8ifaajrcjis7z0h5d0t5cwimi2jd103yf951ujbjmy7q0fbjbmha1aignjyepc9rfnf3bj2ack4iznz4k1zs12n7w4la58aewz9awg874eali1mq1mf37l0s1rm4ia07htjoz0osixinmowhj8x0hbmw5k684jv5xx8e0as27enfcc4u67is1psd9xvvac6f7trmuodw9h0l8lu3mbeecusmr0ob1lhiajtb0k4angus2lrbk029coqs8ij0pd59izbgvs9i9qrlb45q007r7u7pml7dqix3vvu4ol4t1ipu1l9rctxllhddcnkmcvpm2p9rwxsq4nu12ge79ecur2be1hfrgqfu4mro3uucnidoiyg5lftch1vk62gc1nvp58sdsunm9nnpo5p9xy06j1pipla6ercz1849v9s050oixbb25a7b2zoho0d1flc14cfo1mxjgfrxt23emj2cxuoscxr7hqehcjr47isu38hwz1nns9aaguy7o8a02wrq3tvhyjn79eq71148ygbqofjz6iltqvxtqgolf2cwxuppluqakb5jie7bmvsbysw7csuaay1lkmoqg2ejaps9kmpyuxd8x3m33upqub9hhfi96zt1c762luis1inhp69potbqxmc833quienqdmmgzbuvt1b417wbfnqnp6i68fqtgebxci74isysosdscc8czctmgxyhuhaxdvnzirl0jlr06foydvi645sbaz7g86w43o5jolqe2i3ii09b3kuslqufx8nh1lfy639qauozo1xfpuzitlm2y3bhbw09gjg5qef4org8cj0s1nudo9l966wq7n4rddqxygm78i7xq4emsipaiowm63ul6vurrxy7boac0qzv4d6i30d0mrubvt5eth3a6zrn3gyfmmpjefj9ftu3gtb6biap7oyz0jx8deztc2okm368dgbzype49delfuppwkzve6wlbgngnccmc8h9cq9gncc4pra5nxjmu7qxbmrtne0f0g56wby0tkl66sxoiu803md425p5wuohlg93ardtju1q9b81sv5m9usdogi8msq8i4c8zsql7d4adhttcifwu58ps507wa5vhbjlex5mjxj6z7zherrs85mas10nui9coxh6reqgeekhc92gdddg49huu2i2emx5cv9i7dhdvuqduzzomxs8ghgzxuff1b9ph467thaamk96hpjgrsig2fpaws8xagn5wf1wu16ofbitrxkm4c1jrg4rvr91lw64bx1wgrhg0l1yn1ziodjmczwf88thd5j5eqeymmkry2f82g90a207w1ivzpffqgi6qono8hpj7bv1zpp2n0wykpv0jyeiacdjwdetcuhp85xlyyuqq32qaukzykip5s9uw9qusehbno59u5teau4y73udixs4mpfnk97fjsm8xmlw9n04gpr7z24fmdqc6kdh325iapbocydq5ms1f4p
chxers9hht0bghqt4u1s8fpnph55hm2576poulac5rp0pv5j6m8vh66j8gprqda99utxon7j4wbyfj53oow27blwaah4vvgnjp6pneniiqmumojvcfvapnlr0g0bmh6wo3lmk8lttpyrbpjqexmmwh5aenr86gbpumqwvqdknif2skkm5ctoku84x9amrebfc25je17o69f4sft2tsr3mw4qhu4ko9x59u2qwm12g1h55x6azjlokpjun813pi271iwewy8pdwmxgqgohrjrpwsl5el4ir8k4ehohog034cf6kso3a0yjh2ymrgh90dbsiz4doupou34kts1z9838x5ih19dv3osm5ipe9wab2j5yzw10x71h3oquwbh896wukljcpkrgku5346a17g206gzlej8zyamq1o7hqzijfbfxt1vc8tjnbvknky7uo5grveytfkghnew2igkh9r1t1r0nemv9bn9076xwxghm4fxfvtk69ybrjbcbmgjbadbay0mxzikhun1hrkb8r6q0k78i7e01helkf747ziopoo3gojcvruqlqnhsgxbh6uxcx87z80m1137u6x4u8qvym789zhsqa56wluven7lr24g88ect0j59swrl523zso7bxnexnmwcx8bt21dgl2sidtgu6eich6yfcq0wr9gt2ldq3nzq6im9ekgzwu6nuooxe0ssjbt9z9gu9bybfsjelqp9to1omnyowj57orbgxi3a7dmw295h3tztumq9kelq3lfr6gs9yu041bxs0xtdpzontsde3tpnpb6u7dj8xm5x0sr13w7tx7obpxyrz2k5ar6tpxxfw6vzdgik7js1fmupcn3jnyu6c9f8nxhsbby84bsbm2zgr014b18swq8m76yos4zufu3xnrcxstmmsvllkeqh6w8m9piuyw9yz1riqtq4htfult2xne2y5umn5gp431eqhj67amg72piekbtmlw6p9sdvf2b1wum8ou4p4uiqfv2w3ywdcv3m0gbmr5pr7hf8ec0c2h74ff98mj99u5ap9172oz7fzxbyu4p65ra9zn7jkj8kq33any8caszvmphzq5z98i4sx403joc03tx9q7478gi8puvbnhmfwjqus19xlmqhlwcaakyld3wvk5y8ct8ak0bpzhsin6ajdqhee4fdr8oamex1rfviujcuqmbvx2kque087l4l0s8g72rxxe7ee46cruxs7ksccghymhu6ukbdxf8zs92auurte1inlinauky52impxka8b9vcznyztqwb10a99qf8nj6vexo71xnim4ph2ay8j58oow4cnym2jgpw9g1jnbv9avoirich02idmfia289ddeplj8z3ef5a88zv0b6s3xorweoihb2nudkdb07unm56ga9hf43s4qylw7xf0lk5keqz7ath7bx7kvv88kxvrltkad0mzsgg9zcs3st81nxunxl8d9n5w1vaxomcohmk20esqmcgohknz3ypqyxjo9p1v9xotopdzljjev9ccwtkub91fqbcxc1ifdux87nbxl89hmdbn2q3a7qqogi2mz08gkrb3jrom9wgx0dcs3w5zmjazs8n7tkjtxn9chi54zymdetjfww2dupki1svq1ikwvzpzdaxekasm0o849emg5qv6k3i0ytp3kkbnyp2ocx5xwrcfqngcxq5pibmxs == \d\a\b\w\2\2\z\n\i\o\5\i\y\y\r\r\r\o\f\2\m\x\0\r\7\2\c\x\r\h\d\e\s\r\k\z\p\x\2\2\4\r\z\r\7\7\e\9\0\u\k\z\f\s\y\z\m\h\o\k\3\5\a\i\a\8\z\v\w\3\y\e\f\0\q\1\4\1\6\h\o\1\a\e\5\s\l\s\z\z\p\o\q\h\4\i\o\c\t\y\b\w\1\h\7\p\f\l\7\s\8\r\l\q\9\7\q\2\h\z\j\3\2\x\d\1\9\1\3\3\n\3\z\z\x\d\z\e\z\9\m\4\5\l\z\p\i\2\f\g\r\a\1\9\t\3\u\a\i\x\i\l\u\b\t\p\0\6\q\5\2\g\n\d\8\9\w\f\7\u\g\p\g\7\2\b\0\6\9\m\3\a\c\u\8\1\8\8\o\b\h\f\w\4\k\y\5\z\k\2\d\0\y\q\8\l\u\3\f\r\d\9\g\r\u\v\9\a\7\y\g\n\2\z\d\e\p\i\t\b\o\x\4\r\8\r\r\7\b\6\6\r\a\4\w\g\x\r\5\l\g\4\g\8\d\p\2\g\e\b\s\x\o\o\9\q\0\m\l\j\o\e\1\v\7\i\b\5\a\7\j\i\4\a\h\8\r\7\j\8\r\g\4\7\3\t\m\u\7\0\c\n\b\7\a\7\z\g\z\f\t\6\n\0\3\2\a\2\o\n\v\2\n\4\a\l\h\d\l\j\a\x\8\7\l\8\c\8\9\i\b\d\y\k\d\c\p\y\t\v\y\n\k\6\o\u\p\7\v\r\6\i\e\g\h\b\g\y\8\1\5\4\c\d\n\x\g\c\r\o\w\l\f\v\x\l\i\q\s\l\9\o\h\8\x\u\r\c\n\a\d\g\r\d\w\r\9\f\y\t\r\i\a\b\y\1\1\o\z\y\k\9\m\w\k\j\w\e\5\r\3\a\c\q\1\e\q\p\n\7\r\d\q\s\c\c\l\i\m\e\d\e\6\8\6\7\8\z\9\a\i\9\d\0\5\k\q\r\k\d\0\c\v\z\o\m\i\9\j\d\8\m\n\j\7\7\f\9\u\b\x\7\4\f\4\5\e\n\z\t\y\h\p\e\8\z\8\h\8\4\n\l\c\u\c\w\1\w\g\u\r\b\k\a\6\p\f\l\s\h\k\4\f\9\8\m\c\n\z\x\3\c\3\c\z\v\7\j\p\v\n\o\a\j\v\1\m\d\h\h\h\b\b\7\2\k\i\d\g\3\n\n\w\y\w\y\x\a\t\l\k\h\f\u\j\x\k\0\2\s\m\c\a\v\2\c\p\p\9\x\w\b\x\s\k\j\w\b\l\c\f\e\k\r\d\t\3\u\g\e\q\p\h\1\j\n\z\n\n\h\l\5\n\y\y\r\m\l\d\5\w\t\n\9\7\f\v\t\q\u\s\m\1\e\l\4\r\7\r\o\s\i\n\r\c\v\2\k\6\e\h\m\k\q\b\6\9\s\v\c\2\j\o\t\h\3\1\d\n\k\v\i\f\f\w\o\6\5\4\x\7\z\l\t\0\5\x\2\j\l\3\t\1\d\4\7\8\h\k\5\b\a\s\c\4\x\v\4\b\n\s\a\c\k\6\b\1\z\0\j\4\m\j\2\6\0\s\k\l\6\h\t\r\8\q\g\y\r\b\j\w\r\8\v\b\3\a\z\z\6\d\6\4\5\d\1\l\h\t\y\m\z\w\x\y\s\0\j\8\8\c\d\o\4\x\g\2\g\2\d\c\j\s\s\n\3\e\q\k\x\a\4\2\n\n\4\f\x\w\c\7\u\f\u\4\3\u\q\b\3\k\h\r\n\v\n\i\c\p\x\7\p\d\v\g\8\h\m\y\j\0\d\w\f\k\j\x\3\p\3\4\3\j\2\d\n\b\j\9\8\d\w\s\0\e\a\8\t\7\j\y\0\z\s\l\m\u\q\r\x\x\i\8\i\f\a\a\j\r\c\j\i\s\7\z\0\h\5\d\0
\t\5\c\w\i\m\i\2\j\d\1\0\3\y\f\9\5\1\u\j\b\j\m\y\7\q\0\f\b\j\b\m\h\a\1\a\i\g\n\j\y\e\p\c\9\r\f\n\f\3\b\j\2\a\c\k\4\i\z\n\z\4\k\1\z\s\1\2\n\7\w\4\l\a\5\8\a\e\w\z\9\a\w\g\8\7\4\e\a\l\i\1\m\q\1\m\f\3\7\l\0\s\1\r\m\4\i\a\0\7\h\t\j\o\z\0\o\s\i\x\i\n\m\o\w\h\j\8\x\0\h\b\m\w\5\k\6\8\4\j\v\5\x\x\8\e\0\a\s\2\7\e\n\f\c\c\4\u\6\7\i\s\1\p\s\d\9\x\v\v\a\c\6\f\7\t\r\m\u\o\d\w\9\h\0\l\8\l\u\3\m\b\e\e\c\u\s\m\r\0\o\b\1\l\h\i\a\j\t\b\0\k\4\a\n\g\u\s\2\l\r\b\k\0\2\9\c\o\q\s\8\i\j\0\p\d\5\9\i\z\b\g\v\s\9\i\9\q\r\l\b\4\5\q\0\0\7\r\7\u\7\p\m\l\7\d\q\i\x\3\v\v\u\4\o\l\4\t\1\i\p\u\1\l\9\r\c\t\x\l\l\h\d\d\c\n\k\m\c\v\p\m\2\p\9\r\w\x\s\q\4\n\u\1\2\g\e\7\9\e\c\u\r\2\b\e\1\h\f\r\g\q\f\u\4\m\r\o\3\u\u\c\n\i\d\o\i\y\g\5\l\f\t\c\h\1\v\k\6\2\g\c\1\n\v\p\5\8\s\d\s\u\n\m\9\n\n\p\o\5\p\9\x\y\0\6\j\1\p\i\p\l\a\6\e\r\c\z\1\8\4\9\v\9\s\0\5\0\o\i\x\b\b\2\5\a\7\b\2\z\o\h\o\0\d\1\f\l\c\1\4\c\f\o\1\m\x\j\g\f\r\x\t\2\3\e\m\j\2\c\x\u\o\s\c\x\r\7\h\q\e\h\c\j\r\4\7\i\s\u\3\8\h\w\z\1\n\n\s\9\a\a\g\u\y\7\o\8\a\0\2\w\r\q\3\t\v\h\y\j\n\7\9\e\q\7\1\1\4\8\y\g\b\q\o\f\j\z\6\i\l\t\q\v\x\t\q\g\o\l\f\2\c\w\x\u\p\p\l\u\q\a\k\b\5\j\i\e\7\b\m\v\s\b\y\s\w\7\c\s\u\a\a\y\1\l\k\m\o\q\g\2\e\j\a\p\s\9\k\m\p\y\u\x\d\8\x\3\m\3\3\u\p\q\u\b\9\h\h\f\i\9\6\z\t\1\c\7\6\2\l\u\i\s\1\i\n\h\p\6\9\p\o\t\b\q\x\m\c\8\3\3\q\u\i\e\n\q\d\m\m\g\z\b\u\v\t\1\b\4\1\7\w\b\f\n\q\n\p\6\i\6\8\f\q\t\g\e\b\x\c\i\7\4\i\s\y\s\o\s\d\s\c\c\8\c\z\c\t\m\g\x\y\h\u\h\a\x\d\v\n\z\i\r\l\0\j\l\r\0\6\f\o\y\d\v\i\6\4\5\s\b\a\z\7\g\8\6\w\4\3\o\5\j\o\l\q\e\2\i\3\i\i\0\9\b\3\k\u\s\l\q\u\f\x\8\n\h\1\l\f\y\6\3\9\q\a\u\o\z\o\1\x\f\p\u\z\i\t\l\m\2\y\3\b\h\b\w\0\9\g\j\g\5\q\e\f\4\o\r\g\8\c\j\0\s\1\n\u\d\o\9\l\9\6\6\w\q\7\n\4\r\d\d\q\x\y\g\m\7\8\i\7\x\q\4\e\m\s\i\p\a\i\o\w\m\6\3\u\l\6\v\u\r\r\x\y\7\b\o\a\c\0\q\z\v\4\d\6\i\3\0\d\0\m\r\u\b\v\t\5\e\t\h\3\a\6\z\r\n\3\g\y\f\m\m\p\j\e\f\j\9\f\t\u\3\g\t\b\6\b\i\a\p\7\o\y\z\0\j\x\8\d\e\z\t\c\2\o\k\m\3\6\8\d\g\b\z\y\p\e\4\9\d\e\l\f\u\p\p\w\k\z\v\e\6\w\l\b\g\n\g\n\c\c\m\c\8\h\9\c\q\9\g\n\c\c\4\p\r\a\5\n\x\j\m\u\7\q\x\b\m\r\t\n\e\0\f\0\g\5\6\w\b\y\0\t\k\l\6\6\s\x\o\i\u\8\0\3\m\d\4\2\5\p\5\w\u\o\h\l\g\9\3\a\r\d\t\j\u\1\q\9\b\8\1\s\v\5\m\9\u\s\d\o\g\i\8\m\s\q\8\i\4\c\8\z\s\q\l\7\d\4\a\d\h\t\t\c\i\f\w\u\5\8\p\s\5\0\7\w\a\5\v\h\b\j\l\e\x\5\m\j\x\j\6\z\7\z\h\e\r\r\s\8\5\m\a\s\1\0\n\u\i\9\c\o\x\h\6\r\e\q\g\e\e\k\h\c\9\2\g\d\d\d\g\4\9\h\u\u\2\i\2\e\m\x\5\c\v\9\i\7\d\h\d\v\u\q\d\u\z\z\o\m\x\s\8\g\h\g\z\x\u\f\f\1\b\9\p\h\4\6\7\t\h\a\a\m\k\9\6\h\p\j\g\r\s\i\g\2\f\p\a\w\s\8\x\a\g\n\5\w\f\1\w\u\1\6\o\f\b\i\t\r\x\k\m\4\c\1\j\r\g\4\r\v\r\9\1\l\w\6\4\b\x\1\w\g\r\h\g\0\l\1\y\n\1\z\i\o\d\j\m\c\z\w\f\8\8\t\h\d\5\j\5\e\q\e\y\m\m\k\r\y\2\f\8\2\g\9\0\a\2\0\7\w\1\i\v\z\p\f\f\q\g\i\6\q\o\n\o\8\h\p\j\7\b\v\1\z\p\p\2\n\0\w\y\k\p\v\0\j\y\e\i\a\c\d\j\w\d\e\t\c\u\h\p\8\5\x\l\y\y\u\q\q\3\2\q\a\u\k\z\y\k\i\p\5\s\9\u\w\9\q\u\s\e\h\b\n\o\5\9\u\5\t\e\a\u\4\y\7\3\u\d\i\x\s\4\m\p\f\n\k\9\7\f\j\s\m\8\x\m\l\w\9\n\0\4\g\p\r\7\z\2\4\f\m\d\q\c\6\k\d\h\3\2\5\i\a\p\b\o\c\y\d\q\5\m\s\1\f\4\p\c\h\x\e\r\s\9\h\h\t\0\b\g\h\q\t\4\u\1\s\8\f\p\n\p\h\5\5\h\m\2\5\7\6\p\o\u\l\a\c\5\r\p\0\p\v\5\j\6\m\8\v\h\6\6\j\8\g\p\r\q\d\a\9\9\u\t\x\o\n\7\j\4\w\b\y\f\j\5\3\o\o\w\2\7\b\l\w\a\a\h\4\v\v\g\n\j\p\6\p\n\e\n\i\i\q\m\u\m\o\j\v\c\f\v\a\p\n\l\r\0\g\0\b\m\h\6\w\o\3\l\m\k\8\l\t\t\p\y\r\b\p\j\q\e\x\m\m\w\h\5\a\e\n\r\8\6\g\b\p\u\m\q\w\v\q\d\k\n\i\f\2\s\k\k\m\5\c\t\o\k\u\8\4\x\9\a\m\r\e\b\f\c\2\5\j\e\1\7\o\6\9\f\4\s\f\t\2\t\s\r\3\m\w\4\q\h\u\4\k\o\9\x\5\9\u\2\q\w\m\1\2\g\1\h\5\5\x\6\a\z\j\l\o\k\p\j\u\n\8\1\3\p\i\2\7\1\i\w\e\w\y\8\p\d\w\m\x\g\q\g\o\h\r\j\r\p\w\s\l\5\e\l\4\i\r\8\k\4\e\h\o\h\o\g\0\3\4\c\f\6\k\s\o\3\a\0\y\j\h\2\y\
m\r\g\h\9\0\d\b\s\i\z\4\d\o\u\p\o\u\3\4\k\t\s\1\z\9\8\3\8\x\5\i\h\1\9\d\v\3\o\s\m\5\i\p\e\9\w\a\b\2\j\5\y\z\w\1\0\x\7\1\h\3\o\q\u\w\b\h\8\9\6\w\u\k\l\j\c\p\k\r\g\k\u\5\3\4\6\a\1\7\g\2\0\6\g\z\l\e\j\8\z\y\a\m\q\1\o\7\h\q\z\i\j\f\b\f\x\t\1\v\c\8\t\j\n\b\v\k\n\k\y\7\u\o\5\g\r\v\e\y\t\f\k\g\h\n\e\w\2\i\g\k\h\9\r\1\t\1\r\0\n\e\m\v\9\b\n\9\0\7\6\x\w\x\g\h\m\4\f\x\f\v\t\k\6\9\y\b\r\j\b\c\b\m\g\j\b\a\d\b\a\y\0\m\x\z\i\k\h\u\n\1\h\r\k\b\8\r\6\q\0\k\7\8\i\7\e\0\1\h\e\l\k\f\7\4\7\z\i\o\p\o\o\3\g\o\j\c\v\r\u\q\l\q\n\h\s\g\x\b\h\6\u\x\c\x\8\7\z\8\0\m\1\1\3\7\u\6\x\4\u\8\q\v\y\m\7\8\9\z\h\s\q\a\5\6\w\l\u\v\e\n\7\l\r\2\4\g\8\8\e\c\t\0\j\5\9\s\w\r\l\5\2\3\z\s\o\7\b\x\n\e\x\n\m\w\c\x\8\b\t\2\1\d\g\l\2\s\i\d\t\g\u\6\e\i\c\h\6\y\f\c\q\0\w\r\9\g\t\2\l\d\q\3\n\z\q\6\i\m\9\e\k\g\z\w\u\6\n\u\o\o\x\e\0\s\s\j\b\t\9\z\9\g\u\9\b\y\b\f\s\j\e\l\q\p\9\t\o\1\o\m\n\y\o\w\j\5\7\o\r\b\g\x\i\3\a\7\d\m\w\2\9\5\h\3\t\z\t\u\m\q\9\k\e\l\q\3\l\f\r\6\g\s\9\y\u\0\4\1\b\x\s\0\x\t\d\p\z\o\n\t\s\d\e\3\t\p\n\p\b\6\u\7\d\j\8\x\m\5\x\0\s\r\1\3\w\7\t\x\7\o\b\p\x\y\r\z\2\k\5\a\r\6\t\p\x\x\f\w\6\v\z\d\g\i\k\7\j\s\1\f\m\u\p\c\n\3\j\n\y\u\6\c\9\f\8\n\x\h\s\b\b\y\8\4\b\s\b\m\2\z\g\r\0\1\4\b\1\8\s\w\q\8\m\7\6\y\o\s\4\z\u\f\u\3\x\n\r\c\x\s\t\m\m\s\v\l\l\k\e\q\h\6\w\8\m\9\p\i\u\y\w\9\y\z\1\r\i\q\t\q\4\h\t\f\u\l\t\2\x\n\e\2\y\5\u\m\n\5\g\p\4\3\1\e\q\h\j\6\7\a\m\g\7\2\p\i\e\k\b\t\m\l\w\6\p\9\s\d\v\f\2\b\1\w\u\m\8\o\u\4\p\4\u\i\q\f\v\2\w\3\y\w\d\c\v\3\m\0\g\b\m\r\5\p\r\7\h\f\8\e\c\0\c\2\h\7\4\f\f\9\8\m\j\9\9\u\5\a\p\9\1\7\2\o\z\7\f\z\x\b\y\u\4\p\6\5\r\a\9\z\n\7\j\k\j\8\k\q\3\3\a\n\y\8\c\a\s\z\v\m\p\h\z\q\5\z\9\8\i\4\s\x\4\0\3\j\o\c\0\3\t\x\9\q\7\4\7\8\g\i\8\p\u\v\b\n\h\m\f\w\j\q\u\s\1\9\x\l\m\q\h\l\w\c\a\a\k\y\l\d\3\w\v\k\5\y\8\c\t\8\a\k\0\b\p\z\h\s\i\n\6\a\j\d\q\h\e\e\4\f\d\r\8\o\a\m\e\x\1\r\f\v\i\u\j\c\u\q\m\b\v\x\2\k\q\u\e\0\8\7\l\4\l\0\s\8\g\7\2\r\x\x\e\7\e\e\4\6\c\r\u\x\s\7\k\s\c\c\g\h\y\m\h\u\6\u\k\b\d\x\f\8\z\s\9\2\a\u\u\r\t\e\1\i\n\l\i\n\a\u\k\y\5\2\i\m\p\x\k\a\8\b\9\v\c\z\n\y\z\t\q\w\b\1\0\a\9\9\q\f\8\n\j\6\v\e\x\o\7\1\x\n\i\m\4\p\h\2\a\y\8\j\5\8\o\o\w\4\c\n\y\m\2\j\g\p\w\9\g\1\j\n\b\v\9\a\v\o\i\r\i\c\h\0\2\i\d\m\f\i\a\2\8\9\d\d\e\p\l\j\8\z\3\e\f\5\a\8\8\z\v\0\b\6\s\3\x\o\r\w\e\o\i\h\b\2\n\u\d\k\d\b\0\7\u\n\m\5\6\g\a\9\h\f\4\3\s\4\q\y\l\w\7\x\f\0\l\k\5\k\e\q\z\7\a\t\h\7\b\x\7\k\v\v\8\8\k\x\v\r\l\t\k\a\d\0\m\z\s\g\g\9\z\c\s\3\s\t\8\1\n\x\u\n\x\l\8\d\9\n\5\w\1\v\a\x\o\m\c\o\h\m\k\2\0\e\s\q\m\c\g\o\h\k\n\z\3\y\p\q\y\x\j\o\9\p\1\v\9\x\o\t\o\p\d\z\l\j\j\e\v\9\c\c\w\t\k\u\b\9\1\f\q\b\c\x\c\1\i\f\d\u\x\8\7\n\b\x\l\8\9\h\m\d\b\n\2\q\3\a\7\q\q\o\g\i\2\m\z\0\8\g\k\r\b\3\j\r\o\m\9\w\g\x\0\d\c\s\3\w\5\z\m\j\a\z\s\8\n\7\t\k\j\t\x\n\9\c\h\i\5\4\z\y\m\d\e\t\j\f\w\w\2\d\u\p\k\i\1\s\v\q\1\i\k\w\v\z\p\z\d\a\x\e\k\a\s\m\0\o\8\4\9\e\m\g\5\q\v\6\k\3\i\0\y\t\p\3\k\k\b\n\y\p\2\o\c\x\5\x\w\r\c\f\q\n\g\c\x\q\5\p\i\b\m\x\s ]] 00:27:46.216 00:27:46.216 real 0m3.308s 00:27:46.216 user 0m2.687s 00:27:46.216 sys 0m0.507s 00:27:46.216 10:41:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:46.216 10:41:40 -- common/autotest_common.sh@10 -- # set +x 00:27:46.216 ************************************ 00:27:46.216 END TEST dd_rw_offset 00:27:46.216 ************************************ 00:27:46.475 10:41:40 -- dd/basic_rw.sh@1 -- # cleanup 00:27:46.475 10:41:40 -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:27:46.475 10:41:40 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:27:46.475 10:41:40 -- dd/common.sh@11 -- # local nvme_ref= 00:27:46.475 10:41:40 -- dd/common.sh@12 -- # local size=0xffff 00:27:46.475 10:41:40 -- dd/common.sh@14 -- # local bs=1048576 
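Stripped of the xtrace, the dd_rw_offset test that just finished is a seek/skip round trip: 4096 random bytes go into dd.dump0, spdk_dd writes them one block past the start of the bdev (--seek=1), reads that block back (--skip=1 --count=1), and a bash string compare checks the payload survived. A condensed sketch, with CONF as a hypothetical file holding the same bdev JSON the log dumps above:

    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    TD=/home/vagrant/spdk_repo/spdk/test/dd
    CONF=/tmp/nvme0.json   # hypothetical: the bdev config shown in the JSON dumps
    "$DD" --if="$TD/dd.dump0" --ob=Nvme0n1 --seek=1 --json "$CONF"            # write block 1
    "$DD" --ib=Nvme0n1 --of="$TD/dd.dump1" --skip=1 --count=1 --json "$CONF"  # read block 1 back
    read -rn4096 data       < "$TD/dd.dump0"
    read -rn4096 data_check < "$TD/dd.dump1"
    [[ $data == "$data_check" ]]   # both 4096-byte payloads must be identical

The backslash-heavy blob above is just bash xtrace escaping the right-hand side of that [[ ... == ... ]] match; the payload itself is the same random string printed earlier.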
00:27:46.475 10:41:40 -- dd/common.sh@15 -- # local count=1 00:27:46.475 10:41:40 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:27:46.475 10:41:40 -- dd/common.sh@18 -- # gen_conf 00:27:46.475 10:41:40 -- dd/common.sh@31 -- # xtrace_disable 00:27:46.475 10:41:40 -- common/autotest_common.sh@10 -- # set +x 00:27:46.475 [2024-07-12 10:41:40.193771] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:46.475 [2024-07-12 10:41:40.194176] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138152 ] 00:27:46.475 { 00:27:46.475 "subsystems": [ 00:27:46.475 { 00:27:46.475 "subsystem": "bdev", 00:27:46.475 "config": [ 00:27:46.475 { 00:27:46.475 "params": { 00:27:46.475 "trtype": "pcie", 00:27:46.475 "traddr": "0000:00:06.0", 00:27:46.475 "name": "Nvme0" 00:27:46.475 }, 00:27:46.475 "method": "bdev_nvme_attach_controller" 00:27:46.475 }, 00:27:46.475 { 00:27:46.475 "method": "bdev_wait_for_examine" 00:27:46.475 } 00:27:46.475 ] 00:27:46.475 } 00:27:46.475 ] 00:27:46.475 } 00:27:46.475 [2024-07-12 10:41:40.361557] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:46.732 [2024-07-12 10:41:40.527308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:47.924  Copying: 1024/1024 [kB] (average 500 MBps) 00:27:47.924 00:27:47.924 10:41:41 -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:47.924 ************************************ 00:27:47.924 END TEST spdk_dd_basic_rw 00:27:47.924 ************************************ 00:27:47.924 00:27:47.924 real 0m39.315s 00:27:47.924 user 0m32.067s 00:27:47.924 sys 0m5.687s 00:27:47.924 10:41:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:47.924 10:41:41 -- common/autotest_common.sh@10 -- # set +x 00:27:47.924 10:41:41 -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:27:47.924 10:41:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:47.924 10:41:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:47.924 10:41:41 -- common/autotest_common.sh@10 -- # set +x 00:27:47.924 ************************************ 00:27:47.924 START TEST spdk_dd_posix 00:27:47.924 ************************************ 00:27:47.924 10:41:41 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:27:48.182 * Looking for test storage... 
00:27:48.182 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:27:48.182 10:41:41 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:48.182 10:41:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:48.182 10:41:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:48.182 10:41:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:48.182 10:41:41 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:48.182 10:41:41 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:48.182 10:41:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:48.182 10:41:41 -- paths/export.sh@5 -- # export PATH 00:27:48.182 10:41:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:48.182 10:41:41 -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:27:48.182 10:41:41 -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:27:48.182 10:41:41 -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:27:48.182 10:41:41 -- dd/posix.sh@125 -- # trap cleanup EXIT 00:27:48.182 10:41:41 -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:48.182 10:41:41 -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:48.182 10:41:41 -- dd/posix.sh@130 -- # tests 00:27:48.182 10:41:41 -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', using AIO' 00:27:48.182 * First test run, using AIO 00:27:48.182 10:41:41 -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:27:48.182 10:41:41 -- 
common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:48.182 10:41:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:48.182 10:41:41 -- common/autotest_common.sh@10 -- # set +x 00:27:48.182 ************************************ 00:27:48.182 START TEST dd_flag_append 00:27:48.182 ************************************ 00:27:48.182 10:41:41 -- common/autotest_common.sh@1104 -- # append 00:27:48.182 10:41:41 -- dd/posix.sh@16 -- # local dump0 00:27:48.182 10:41:41 -- dd/posix.sh@17 -- # local dump1 00:27:48.182 10:41:41 -- dd/posix.sh@19 -- # gen_bytes 32 00:27:48.182 10:41:41 -- dd/common.sh@98 -- # xtrace_disable 00:27:48.182 10:41:41 -- common/autotest_common.sh@10 -- # set +x 00:27:48.182 10:41:41 -- dd/posix.sh@19 -- # dump0=dgwo9ska7jcyv35f1c814kqeq275br6p 00:27:48.182 10:41:41 -- dd/posix.sh@20 -- # gen_bytes 32 00:27:48.182 10:41:41 -- dd/common.sh@98 -- # xtrace_disable 00:27:48.182 10:41:41 -- common/autotest_common.sh@10 -- # set +x 00:27:48.182 10:41:41 -- dd/posix.sh@20 -- # dump1=x7qm6bpjibm6su7tsudkclw1o5te4f5j 00:27:48.182 10:41:41 -- dd/posix.sh@22 -- # printf %s dgwo9ska7jcyv35f1c814kqeq275br6p 00:27:48.182 10:41:41 -- dd/posix.sh@23 -- # printf %s x7qm6bpjibm6su7tsudkclw1o5te4f5j 00:27:48.182 10:41:41 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:27:48.182 [2024-07-12 10:41:41.934754] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:48.182 [2024-07-12 10:41:41.935106] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138221 ] 00:27:48.440 [2024-07-12 10:41:42.102779] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:48.440 [2024-07-12 10:41:42.274653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:49.632  Copying: 32/32 [B] (average 31 kBps) 00:27:49.632 00:27:49.632 ************************************ 00:27:49.632 END TEST dd_flag_append 00:27:49.632 ************************************ 00:27:49.632 10:41:43 -- dd/posix.sh@27 -- # [[ x7qm6bpjibm6su7tsudkclw1o5te4f5jdgwo9ska7jcyv35f1c814kqeq275br6p == \x\7\q\m\6\b\p\j\i\b\m\6\s\u\7\t\s\u\d\k\c\l\w\1\o\5\t\e\4\f\5\j\d\g\w\o\9\s\k\a\7\j\c\y\v\3\5\f\1\c\8\1\4\k\q\e\q\2\7\5\b\r\6\p ]] 00:27:49.632 00:27:49.632 real 0m1.615s 00:27:49.632 user 0m1.232s 00:27:49.632 sys 0m0.238s 00:27:49.632 10:41:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:49.632 10:41:43 -- common/autotest_common.sh@10 -- # set +x 00:27:49.632 10:41:43 -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:27:49.632 10:41:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:49.632 10:41:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:49.632 10:41:43 -- common/autotest_common.sh@10 -- # set +x 00:27:49.632 ************************************ 00:27:49.632 START TEST dd_flag_directory 00:27:49.632 ************************************ 00:27:49.632 10:41:43 -- common/autotest_common.sh@1104 -- # directory 00:27:49.632 10:41:43 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:49.632 10:41:43 -- common/autotest_common.sh@640 -- # local es=0 
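dd_flag_append, which completed above, reduces to a few lines once the tracing is peeled away: two 32-byte random strings land in dd.dump0 and dd.dump1, spdk_dd copies dump0 onto dump1 with --oflag=append, and the result must be dump1's bytes followed by dump0's. A stand-alone sketch (the tr pipeline stands in for gen_bytes 32):

    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    dump0=$(tr -dc a-z0-9 < /dev/urandom | head -c 32)   # stand-in for gen_bytes 32
    dump1=$(tr -dc a-z0-9 < /dev/urandom | head -c 32)
    printf %s "$dump0" > dd.dump0
    printf %s "$dump1" > dd.dump1
    "$DD" --if=dd.dump0 --of=dd.dump1 --oflag=append
    [[ $(<dd.dump1) == "${dump1}${dump0}" ]]   # append keeps dump1's bytes in front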
00:27:49.632 10:41:43 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:49.632 10:41:43 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:49.632 10:41:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:49.632 10:41:43 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:49.632 10:41:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:49.632 10:41:43 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:49.632 10:41:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:49.632 10:41:43 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:49.632 10:41:43 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:49.632 10:41:43 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:49.890 [2024-07-12 10:41:43.599888] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:49.890 [2024-07-12 10:41:43.600281] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138275 ] 00:27:49.890 [2024-07-12 10:41:43.769348] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:50.148 [2024-07-12 10:41:43.931746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:50.406 [2024-07-12 10:41:44.178819] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:27:50.406 [2024-07-12 10:41:44.179188] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:27:50.406 [2024-07-12 10:41:44.179253] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:50.970 [2024-07-12 10:41:44.752872] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:27:51.228 10:41:45 -- common/autotest_common.sh@643 -- # es=236 00:27:51.228 10:41:45 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:51.228 10:41:45 -- common/autotest_common.sh@652 -- # es=108 00:27:51.228 10:41:45 -- common/autotest_common.sh@653 -- # case "$es" in 00:27:51.228 10:41:45 -- common/autotest_common.sh@660 -- # es=1 00:27:51.228 10:41:45 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:51.228 10:41:45 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:27:51.228 10:41:45 -- common/autotest_common.sh@640 -- # local es=0 00:27:51.228 10:41:45 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:27:51.228 10:41:45 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:51.228 10:41:45 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:51.228 10:41:45 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:51.228 10:41:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:51.228 10:41:45 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:51.228 10:41:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:51.228 10:41:45 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:51.228 10:41:45 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:51.228 10:41:45 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:27:51.493 [2024-07-12 10:41:45.142772] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:51.493 [2024-07-12 10:41:45.143178] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138302 ] 00:27:51.493 [2024-07-12 10:41:45.317704] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:51.756 [2024-07-12 10:41:45.518649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:52.013 [2024-07-12 10:41:45.765338] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:27:52.013 [2024-07-12 10:41:45.765686] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:27:52.013 [2024-07-12 10:41:45.765747] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:52.579 [2024-07-12 10:41:46.339624] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:27:52.838 ************************************ 00:27:52.838 END TEST dd_flag_directory 00:27:52.838 ************************************ 00:27:52.838 10:41:46 -- common/autotest_common.sh@643 -- # es=236 00:27:52.838 10:41:46 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:52.838 10:41:46 -- common/autotest_common.sh@652 -- # es=108 00:27:52.838 10:41:46 -- common/autotest_common.sh@653 -- # case "$es" in 00:27:52.838 10:41:46 -- common/autotest_common.sh@660 -- # es=1 00:27:52.838 10:41:46 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:52.838 00:27:52.838 real 0m3.134s 00:27:52.838 user 0m2.489s 00:27:52.838 sys 0m0.440s 00:27:52.838 10:41:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:52.838 10:41:46 -- common/autotest_common.sh@10 -- # set +x 00:27:52.838 10:41:46 -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:27:52.838 10:41:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:52.838 10:41:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:52.838 10:41:46 -- common/autotest_common.sh@10 -- # set +x 00:27:52.838 ************************************ 00:27:52.838 START TEST dd_flag_nofollow 00:27:52.838 ************************************ 00:27:52.838 10:41:46 -- common/autotest_common.sh@1104 -- # nofollow 00:27:52.838 10:41:46 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:27:52.838 10:41:46 -- dd/posix.sh@37 -- # local 
test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:27:52.838 10:41:46 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:27:52.838 10:41:46 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:27:52.838 10:41:46 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:52.838 10:41:46 -- common/autotest_common.sh@640 -- # local es=0 00:27:52.838 10:41:46 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:52.838 10:41:46 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:52.838 10:41:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:52.838 10:41:46 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:52.838 10:41:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:52.838 10:41:46 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:52.838 10:41:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:52.838 10:41:46 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:52.838 10:41:46 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:52.838 10:41:46 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:53.096 [2024-07-12 10:41:46.800490] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:27:53.096 [2024-07-12 10:41:46.801482] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138340 ] 00:27:53.096 [2024-07-12 10:41:46.969863] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:53.353 [2024-07-12 10:41:47.148259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:53.611 [2024-07-12 10:41:47.398494] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:27:53.611 [2024-07-12 10:41:47.398727] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:27:53.611 [2024-07-12 10:41:47.398784] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:54.177 [2024-07-12 10:41:47.970209] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:27:54.435 10:41:48 -- common/autotest_common.sh@643 -- # es=216 00:27:54.435 10:41:48 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:54.435 10:41:48 -- common/autotest_common.sh@652 -- # es=88 00:27:54.435 10:41:48 -- common/autotest_common.sh@653 -- # case "$es" in 00:27:54.435 10:41:48 -- common/autotest_common.sh@660 -- # es=1 00:27:54.435 10:41:48 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:54.435 10:41:48 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:27:54.435 10:41:48 -- common/autotest_common.sh@640 -- # local es=0 00:27:54.435 10:41:48 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:27:54.436 10:41:48 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:54.436 10:41:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:54.436 10:41:48 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:54.436 10:41:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:54.436 10:41:48 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:54.436 10:41:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:54.436 10:41:48 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:54.436 10:41:48 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:54.436 10:41:48 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:27:54.694 [2024-07-12 10:41:48.367510] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:27:54.694 [2024-07-12 10:41:48.367920] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138368 ] 00:27:54.694 [2024-07-12 10:41:48.536927] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:54.952 [2024-07-12 10:41:48.698552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:55.210 [2024-07-12 10:41:48.945206] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:27:55.210 [2024-07-12 10:41:48.945571] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:27:55.210 [2024-07-12 10:41:48.945633] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:55.775 [2024-07-12 10:41:49.518315] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:27:56.032 10:41:49 -- common/autotest_common.sh@643 -- # es=216 00:27:56.032 10:41:49 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:56.032 10:41:49 -- common/autotest_common.sh@652 -- # es=88 00:27:56.032 10:41:49 -- common/autotest_common.sh@653 -- # case "$es" in 00:27:56.032 10:41:49 -- common/autotest_common.sh@660 -- # es=1 00:27:56.032 10:41:49 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:56.033 10:41:49 -- dd/posix.sh@46 -- # gen_bytes 512 00:27:56.033 10:41:49 -- dd/common.sh@98 -- # xtrace_disable 00:27:56.033 10:41:49 -- common/autotest_common.sh@10 -- # set +x 00:27:56.033 10:41:49 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:56.033 [2024-07-12 10:41:49.907027] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:27:56.033 [2024-07-12 10:41:49.907387] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138407 ] 00:27:56.290 [2024-07-12 10:41:50.058524] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:56.549 [2024-07-12 10:41:50.220553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:57.741  Copying: 512/512 [B] (average 500 kBps) 00:27:57.741 00:27:57.741 ************************************ 00:27:57.741 END TEST dd_flag_nofollow 00:27:57.741 ************************************ 00:27:57.741 10:41:51 -- dd/posix.sh@49 -- # [[ p8vbk48pjw1kuuuq5z8wex8b832xr82m4y9v7xe5dvyj1rpek422g12ozebp3nqqeasunbutebw6dylt85yf7jw1p114i90c5573jlvx5jmnsfe5fr564mr8yeu2q0xq3xieupis79hzlq0ut764ul4w5ozjk6jjoogm29e20hrznrd6ywzmq8d4g6opgnqd994gucpjzq712r1zz2w4rzrqpiyllrn0qs2b925jw64jxumqn45oq7ts5pbpa721skjamwafoznvjjp4kizrjler7eh9zoqjkuz3rsokt0pd60t4e9ysmthte14lyhyvvjotxqqk4ia0byslpgpvtxlqnias76tnunnh8jzpsmbkoaqepwczbzk1f5i5crmt8l2kl96zcp2wfai6ikqrcgfx4jep5mnl90n23mfki70gzlv4hxo2yqzu7b27fe1kjgb9cy45iz9wy6qxlwtic20dgyu0khzjua8i4mo0xid8o4icj49klfnget7ykgm2 == \p\8\v\b\k\4\8\p\j\w\1\k\u\u\u\q\5\z\8\w\e\x\8\b\8\3\2\x\r\8\2\m\4\y\9\v\7\x\e\5\d\v\y\j\1\r\p\e\k\4\2\2\g\1\2\o\z\e\b\p\3\n\q\q\e\a\s\u\n\b\u\t\e\b\w\6\d\y\l\t\8\5\y\f\7\j\w\1\p\1\1\4\i\9\0\c\5\5\7\3\j\l\v\x\5\j\m\n\s\f\e\5\f\r\5\6\4\m\r\8\y\e\u\2\q\0\x\q\3\x\i\e\u\p\i\s\7\9\h\z\l\q\0\u\t\7\6\4\u\l\4\w\5\o\z\j\k\6\j\j\o\o\g\m\2\9\e\2\0\h\r\z\n\r\d\6\y\w\z\m\q\8\d\4\g\6\o\p\g\n\q\d\9\9\4\g\u\c\p\j\z\q\7\1\2\r\1\z\z\2\w\4\r\z\r\q\p\i\y\l\l\r\n\0\q\s\2\b\9\2\5\j\w\6\4\j\x\u\m\q\n\4\5\o\q\7\t\s\5\p\b\p\a\7\2\1\s\k\j\a\m\w\a\f\o\z\n\v\j\j\p\4\k\i\z\r\j\l\e\r\7\e\h\9\z\o\q\j\k\u\z\3\r\s\o\k\t\0\p\d\6\0\t\4\e\9\y\s\m\t\h\t\e\1\4\l\y\h\y\v\v\j\o\t\x\q\q\k\4\i\a\0\b\y\s\l\p\g\p\v\t\x\l\q\n\i\a\s\7\6\t\n\u\n\n\h\8\j\z\p\s\m\b\k\o\a\q\e\p\w\c\z\b\z\k\1\f\5\i\5\c\r\m\t\8\l\2\k\l\9\6\z\c\p\2\w\f\a\i\6\i\k\q\r\c\g\f\x\4\j\e\p\5\m\n\l\9\0\n\2\3\m\f\k\i\7\0\g\z\l\v\4\h\x\o\2\y\q\z\u\7\b\2\7\f\e\1\k\j\g\b\9\c\y\4\5\i\z\9\w\y\6\q\x\l\w\t\i\c\2\0\d\g\y\u\0\k\h\z\j\u\a\8\i\4\m\o\0\x\i\d\8\o\4\i\c\j\4\9\k\l\f\n\g\e\t\7\y\k\g\m\2 ]] 00:27:57.741 00:27:57.741 real 0m4.706s 00:27:57.741 user 0m3.706s 00:27:57.741 sys 0m0.644s 00:27:57.741 10:41:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:57.741 10:41:51 -- common/autotest_common.sh@10 -- # set +x 00:27:57.741 10:41:51 -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:27:57.741 10:41:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:57.741 10:41:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:57.741 10:41:51 -- common/autotest_common.sh@10 -- # set +x 00:27:57.741 ************************************ 00:27:57.741 START TEST dd_flag_noatime 00:27:57.741 ************************************ 00:27:57.741 10:41:51 -- common/autotest_common.sh@1104 -- # noatime 00:27:57.741 10:41:51 -- dd/posix.sh@53 -- # local atime_if 00:27:57.741 10:41:51 -- dd/posix.sh@54 -- # local atime_of 00:27:57.741 10:41:51 -- dd/posix.sh@58 -- # gen_bytes 512 00:27:57.741 10:41:51 -- dd/common.sh@98 -- # xtrace_disable 00:27:57.741 10:41:51 -- common/autotest_common.sh@10 -- # set +x 00:27:57.741 10:41:51 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:57.741 10:41:51 -- dd/posix.sh@60 -- # atime_if=1720780910 00:27:57.741 10:41:51 -- 
dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:57.741 10:41:51 -- dd/posix.sh@61 -- # atime_of=1720780911 00:27:57.741 10:41:51 -- dd/posix.sh@66 -- # sleep 1 00:27:58.677 10:41:52 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:58.677 [2024-07-12 10:41:52.575923] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:58.677 [2024-07-12 10:41:52.576130] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138471 ] 00:27:58.936 [2024-07-12 10:41:52.745393] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:59.194 [2024-07-12 10:41:52.916264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:00.425  Copying: 512/512 [B] (average 500 kBps) 00:28:00.425 00:28:00.425 10:41:54 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:00.425 10:41:54 -- dd/posix.sh@69 -- # (( atime_if == 1720780910 )) 00:28:00.425 10:41:54 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:00.425 10:41:54 -- dd/posix.sh@70 -- # (( atime_of == 1720780911 )) 00:28:00.425 10:41:54 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:00.425 [2024-07-12 10:41:54.174556] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:28:00.425 [2024-07-12 10:41:54.174763] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138498 ] 00:28:00.710 [2024-07-12 10:41:54.341931] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:00.710 [2024-07-12 10:41:54.511572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:01.916  Copying: 512/512 [B] (average 500 kBps) 00:28:01.916 00:28:01.916 10:41:55 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:01.916 10:41:55 -- dd/posix.sh@73 -- # (( atime_if < 1720780914 )) 00:28:01.916 ************************************ 00:28:01.916 END TEST dd_flag_noatime 00:28:01.916 ************************************ 00:28:01.916 00:28:01.916 real 0m4.210s 00:28:01.916 user 0m2.475s 00:28:01.916 sys 0m0.475s 00:28:01.916 10:41:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:01.916 10:41:55 -- common/autotest_common.sh@10 -- # set +x 00:28:01.916 10:41:55 -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:28:01.916 10:41:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:01.916 10:41:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:01.916 10:41:55 -- common/autotest_common.sh@10 -- # set +x 00:28:01.916 ************************************ 00:28:01.916 START TEST dd_flags_misc 00:28:01.916 ************************************ 00:28:01.916 10:41:55 -- common/autotest_common.sh@1104 -- # io 00:28:01.916 10:41:55 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:28:01.916 10:41:55 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 
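dd_flag_noatime, closed out just above, verifies the flag through inode metadata instead of file contents: it snapshots dd.dump0's access time with stat --printf=%X, copies with --iflag=noatime and expects the atime untouched, then copies again without the flag and expects it to move forward. Roughly (assuming the filesystem updates atime on read, as on this CI host):

    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    atime_if=$(stat --printf=%X dd.dump0)
    "$DD" --if=dd.dump0 --iflag=noatime --of=dd.dump1
    (( atime_if == $(stat --printf=%X dd.dump0) ))   # noatime: unchanged
    sleep 1                                          # atime has one-second granularity
    "$DD" --if=dd.dump0 --of=dd.dump1
    (( atime_if <  $(stat --printf=%X dd.dump0) ))   # plain read bumps it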
00:28:01.916 10:41:55 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:28:01.916 10:41:55 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:28:01.916 10:41:55 -- dd/posix.sh@86 -- # gen_bytes 512 00:28:01.916 10:41:55 -- dd/common.sh@98 -- # xtrace_disable 00:28:01.916 10:41:55 -- common/autotest_common.sh@10 -- # set +x 00:28:01.916 10:41:55 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:01.916 10:41:55 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:28:02.175 [2024-07-12 10:41:55.829539] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:28:02.175 [2024-07-12 10:41:55.829907] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138534 ] 00:28:02.175 [2024-07-12 10:41:55.998133] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:02.434 [2024-07-12 10:41:56.191912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:03.628  Copying: 512/512 [B] (average 500 kBps) 00:28:03.628 00:28:03.628 10:41:57 -- dd/posix.sh@93 -- # [[ pffwdad5hmioihw53559x5bo55yz81my4an813n1ip1bo92vrmn0q8kdcm8bi9khwdz7l8fjcrui702avbslt2xho3et4anns10t44ctyisjvgl8jc5kuenw2ap9n9l9up3b15y4xjdwwwkfvfl1f1d1meelscfubryz61mw9za3wx2z559ymyos49k21i56fxxpnh3svzpm5abu91su4mq231ssrsbbz6ijsmfx9axw3pl8oljc57trzdoanvq4k8eyypgkw84j26ioqv7y9v2mszqwql5nrrazjdln67eibao8yljlp05jzxbm5pqqmzo0t7regdb2z8x1n1wklwrkh8k1zex6yfipa4l6av8hzkpg13ecxfn82x96cv8qpu0lxpskhrmtnjyf5prhgi9xl8lymlu8w57ntabns2v7eje7j7n3dycl8wi2cvrzf8xrgfu9z9op8i29v8lxpnl86aj48wdx9ris2zami7trnz2abmolfyk5a0idm8c6 == \p\f\f\w\d\a\d\5\h\m\i\o\i\h\w\5\3\5\5\9\x\5\b\o\5\5\y\z\8\1\m\y\4\a\n\8\1\3\n\1\i\p\1\b\o\9\2\v\r\m\n\0\q\8\k\d\c\m\8\b\i\9\k\h\w\d\z\7\l\8\f\j\c\r\u\i\7\0\2\a\v\b\s\l\t\2\x\h\o\3\e\t\4\a\n\n\s\1\0\t\4\4\c\t\y\i\s\j\v\g\l\8\j\c\5\k\u\e\n\w\2\a\p\9\n\9\l\9\u\p\3\b\1\5\y\4\x\j\d\w\w\w\k\f\v\f\l\1\f\1\d\1\m\e\e\l\s\c\f\u\b\r\y\z\6\1\m\w\9\z\a\3\w\x\2\z\5\5\9\y\m\y\o\s\4\9\k\2\1\i\5\6\f\x\x\p\n\h\3\s\v\z\p\m\5\a\b\u\9\1\s\u\4\m\q\2\3\1\s\s\r\s\b\b\z\6\i\j\s\m\f\x\9\a\x\w\3\p\l\8\o\l\j\c\5\7\t\r\z\d\o\a\n\v\q\4\k\8\e\y\y\p\g\k\w\8\4\j\2\6\i\o\q\v\7\y\9\v\2\m\s\z\q\w\q\l\5\n\r\r\a\z\j\d\l\n\6\7\e\i\b\a\o\8\y\l\j\l\p\0\5\j\z\x\b\m\5\p\q\q\m\z\o\0\t\7\r\e\g\d\b\2\z\8\x\1\n\1\w\k\l\w\r\k\h\8\k\1\z\e\x\6\y\f\i\p\a\4\l\6\a\v\8\h\z\k\p\g\1\3\e\c\x\f\n\8\2\x\9\6\c\v\8\q\p\u\0\l\x\p\s\k\h\r\m\t\n\j\y\f\5\p\r\h\g\i\9\x\l\8\l\y\m\l\u\8\w\5\7\n\t\a\b\n\s\2\v\7\e\j\e\7\j\7\n\3\d\y\c\l\8\w\i\2\c\v\r\z\f\8\x\r\g\f\u\9\z\9\o\p\8\i\2\9\v\8\l\x\p\n\l\8\6\a\j\4\8\w\d\x\9\r\i\s\2\z\a\m\i\7\t\r\n\z\2\a\b\m\o\l\f\y\k\5\a\0\i\d\m\8\c\6 ]] 00:28:03.628 10:41:57 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:03.628 10:41:57 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:28:03.628 [2024-07-12 10:41:57.445688] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:28:03.628 [2024-07-12 10:41:57.446227] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138566 ] 00:28:03.887 [2024-07-12 10:41:57.613623] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:03.887 [2024-07-12 10:41:57.771227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:05.081  Copying: 512/512 [B] (average 500 kBps) 00:28:05.081 00:28:05.082 10:41:58 -- dd/posix.sh@93 -- # [[ pffwdad5hmioihw53559x5bo55yz81my4an813n1ip1bo92vrmn0q8kdcm8bi9khwdz7l8fjcrui702avbslt2xho3et4anns10t44ctyisjvgl8jc5kuenw2ap9n9l9up3b15y4xjdwwwkfvfl1f1d1meelscfubryz61mw9za3wx2z559ymyos49k21i56fxxpnh3svzpm5abu91su4mq231ssrsbbz6ijsmfx9axw3pl8oljc57trzdoanvq4k8eyypgkw84j26ioqv7y9v2mszqwql5nrrazjdln67eibao8yljlp05jzxbm5pqqmzo0t7regdb2z8x1n1wklwrkh8k1zex6yfipa4l6av8hzkpg13ecxfn82x96cv8qpu0lxpskhrmtnjyf5prhgi9xl8lymlu8w57ntabns2v7eje7j7n3dycl8wi2cvrzf8xrgfu9z9op8i29v8lxpnl86aj48wdx9ris2zami7trnz2abmolfyk5a0idm8c6 == \p\f\f\w\d\a\d\5\h\m\i\o\i\h\w\5\3\5\5\9\x\5\b\o\5\5\y\z\8\1\m\y\4\a\n\8\1\3\n\1\i\p\1\b\o\9\2\v\r\m\n\0\q\8\k\d\c\m\8\b\i\9\k\h\w\d\z\7\l\8\f\j\c\r\u\i\7\0\2\a\v\b\s\l\t\2\x\h\o\3\e\t\4\a\n\n\s\1\0\t\4\4\c\t\y\i\s\j\v\g\l\8\j\c\5\k\u\e\n\w\2\a\p\9\n\9\l\9\u\p\3\b\1\5\y\4\x\j\d\w\w\w\k\f\v\f\l\1\f\1\d\1\m\e\e\l\s\c\f\u\b\r\y\z\6\1\m\w\9\z\a\3\w\x\2\z\5\5\9\y\m\y\o\s\4\9\k\2\1\i\5\6\f\x\x\p\n\h\3\s\v\z\p\m\5\a\b\u\9\1\s\u\4\m\q\2\3\1\s\s\r\s\b\b\z\6\i\j\s\m\f\x\9\a\x\w\3\p\l\8\o\l\j\c\5\7\t\r\z\d\o\a\n\v\q\4\k\8\e\y\y\p\g\k\w\8\4\j\2\6\i\o\q\v\7\y\9\v\2\m\s\z\q\w\q\l\5\n\r\r\a\z\j\d\l\n\6\7\e\i\b\a\o\8\y\l\j\l\p\0\5\j\z\x\b\m\5\p\q\q\m\z\o\0\t\7\r\e\g\d\b\2\z\8\x\1\n\1\w\k\l\w\r\k\h\8\k\1\z\e\x\6\y\f\i\p\a\4\l\6\a\v\8\h\z\k\p\g\1\3\e\c\x\f\n\8\2\x\9\6\c\v\8\q\p\u\0\l\x\p\s\k\h\r\m\t\n\j\y\f\5\p\r\h\g\i\9\x\l\8\l\y\m\l\u\8\w\5\7\n\t\a\b\n\s\2\v\7\e\j\e\7\j\7\n\3\d\y\c\l\8\w\i\2\c\v\r\z\f\8\x\r\g\f\u\9\z\9\o\p\8\i\2\9\v\8\l\x\p\n\l\8\6\a\j\4\8\w\d\x\9\r\i\s\2\z\a\m\i\7\t\r\n\z\2\a\b\m\o\l\f\y\k\5\a\0\i\d\m\8\c\6 ]] 00:28:05.082 10:41:58 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:05.082 10:41:58 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:28:05.341 [2024-07-12 10:41:59.031264] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
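TEST dd_flags_misc, running here, sweeps the input flags (direct, nonblock) against the output flags (direct, nonblock, sync, dsync): each pass generates 512 random bytes, copies them through spdk_dd with that flag pair, and byte-compares source against destination. The very long `[[ ... == \.\.\. ]]` records are that comparison under `set -x`; xtrace escapes every character on the pattern side, which is why the right-hand string appears backslash-riddled. A sketch of the same sweep under stated stand-ins (plain dd for spdk_dd; base64 of the file contents for the raw-byte compare; O_DIRECT needs a filesystem that supports it):

    #!/usr/bin/env bash
    set -euo pipefail
    flags_ro=(direct nonblock)               # input (read) flags under test
    flags_rw=("${flags_ro[@]}" sync dsync)   # output (write) flags under test
    src=$(mktemp) dst=$(mktemp)
    for flag_ro in "${flags_ro[@]}"; do
      head -c 512 /dev/urandom > "$src"      # fresh payload per input flag
      for flag_rw in "${flags_rw[@]}"; do
        dd if="$src" iflag="$flag_ro" of="$dst" oflag="$flag_rw" 2>/dev/null
        [[ "$(base64 -w0 "$src")" == "$(base64 -w0 "$dst")" ]] \
          || echo "mismatch with iflag=$flag_ro oflag=$flag_rw"
      done
    done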
00:28:05.341 [2024-07-12 10:41:59.032399] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138584 ] 00:28:05.341 [2024-07-12 10:41:59.201911] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:05.600 [2024-07-12 10:41:59.374699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:06.794  Copying: 512/512 [B] (average 250 kBps) 00:28:06.794 00:28:06.794 10:42:00 -- dd/posix.sh@93 -- # [[ pffwdad5hmioihw53559x5bo55yz81my4an813n1ip1bo92vrmn0q8kdcm8bi9khwdz7l8fjcrui702avbslt2xho3et4anns10t44ctyisjvgl8jc5kuenw2ap9n9l9up3b15y4xjdwwwkfvfl1f1d1meelscfubryz61mw9za3wx2z559ymyos49k21i56fxxpnh3svzpm5abu91su4mq231ssrsbbz6ijsmfx9axw3pl8oljc57trzdoanvq4k8eyypgkw84j26ioqv7y9v2mszqwql5nrrazjdln67eibao8yljlp05jzxbm5pqqmzo0t7regdb2z8x1n1wklwrkh8k1zex6yfipa4l6av8hzkpg13ecxfn82x96cv8qpu0lxpskhrmtnjyf5prhgi9xl8lymlu8w57ntabns2v7eje7j7n3dycl8wi2cvrzf8xrgfu9z9op8i29v8lxpnl86aj48wdx9ris2zami7trnz2abmolfyk5a0idm8c6 == \p\f\f\w\d\a\d\5\h\m\i\o\i\h\w\5\3\5\5\9\x\5\b\o\5\5\y\z\8\1\m\y\4\a\n\8\1\3\n\1\i\p\1\b\o\9\2\v\r\m\n\0\q\8\k\d\c\m\8\b\i\9\k\h\w\d\z\7\l\8\f\j\c\r\u\i\7\0\2\a\v\b\s\l\t\2\x\h\o\3\e\t\4\a\n\n\s\1\0\t\4\4\c\t\y\i\s\j\v\g\l\8\j\c\5\k\u\e\n\w\2\a\p\9\n\9\l\9\u\p\3\b\1\5\y\4\x\j\d\w\w\w\k\f\v\f\l\1\f\1\d\1\m\e\e\l\s\c\f\u\b\r\y\z\6\1\m\w\9\z\a\3\w\x\2\z\5\5\9\y\m\y\o\s\4\9\k\2\1\i\5\6\f\x\x\p\n\h\3\s\v\z\p\m\5\a\b\u\9\1\s\u\4\m\q\2\3\1\s\s\r\s\b\b\z\6\i\j\s\m\f\x\9\a\x\w\3\p\l\8\o\l\j\c\5\7\t\r\z\d\o\a\n\v\q\4\k\8\e\y\y\p\g\k\w\8\4\j\2\6\i\o\q\v\7\y\9\v\2\m\s\z\q\w\q\l\5\n\r\r\a\z\j\d\l\n\6\7\e\i\b\a\o\8\y\l\j\l\p\0\5\j\z\x\b\m\5\p\q\q\m\z\o\0\t\7\r\e\g\d\b\2\z\8\x\1\n\1\w\k\l\w\r\k\h\8\k\1\z\e\x\6\y\f\i\p\a\4\l\6\a\v\8\h\z\k\p\g\1\3\e\c\x\f\n\8\2\x\9\6\c\v\8\q\p\u\0\l\x\p\s\k\h\r\m\t\n\j\y\f\5\p\r\h\g\i\9\x\l\8\l\y\m\l\u\8\w\5\7\n\t\a\b\n\s\2\v\7\e\j\e\7\j\7\n\3\d\y\c\l\8\w\i\2\c\v\r\z\f\8\x\r\g\f\u\9\z\9\o\p\8\i\2\9\v\8\l\x\p\n\l\8\6\a\j\4\8\w\d\x\9\r\i\s\2\z\a\m\i\7\t\r\n\z\2\a\b\m\o\l\f\y\k\5\a\0\i\d\m\8\c\6 ]] 00:28:06.794 10:42:00 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:06.794 10:42:00 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:28:06.794 [2024-07-12 10:42:00.641152] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:28:06.794 [2024-07-12 10:42:00.641304] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138628 ] 00:28:07.052 [2024-07-12 10:42:00.793445] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:07.052 [2024-07-12 10:42:00.954852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:08.246  Copying: 512/512 [B] (average 71 kBps) 00:28:08.246 00:28:08.246 10:42:02 -- dd/posix.sh@93 -- # [[ pffwdad5hmioihw53559x5bo55yz81my4an813n1ip1bo92vrmn0q8kdcm8bi9khwdz7l8fjcrui702avbslt2xho3et4anns10t44ctyisjvgl8jc5kuenw2ap9n9l9up3b15y4xjdwwwkfvfl1f1d1meelscfubryz61mw9za3wx2z559ymyos49k21i56fxxpnh3svzpm5abu91su4mq231ssrsbbz6ijsmfx9axw3pl8oljc57trzdoanvq4k8eyypgkw84j26ioqv7y9v2mszqwql5nrrazjdln67eibao8yljlp05jzxbm5pqqmzo0t7regdb2z8x1n1wklwrkh8k1zex6yfipa4l6av8hzkpg13ecxfn82x96cv8qpu0lxpskhrmtnjyf5prhgi9xl8lymlu8w57ntabns2v7eje7j7n3dycl8wi2cvrzf8xrgfu9z9op8i29v8lxpnl86aj48wdx9ris2zami7trnz2abmolfyk5a0idm8c6 == \p\f\f\w\d\a\d\5\h\m\i\o\i\h\w\5\3\5\5\9\x\5\b\o\5\5\y\z\8\1\m\y\4\a\n\8\1\3\n\1\i\p\1\b\o\9\2\v\r\m\n\0\q\8\k\d\c\m\8\b\i\9\k\h\w\d\z\7\l\8\f\j\c\r\u\i\7\0\2\a\v\b\s\l\t\2\x\h\o\3\e\t\4\a\n\n\s\1\0\t\4\4\c\t\y\i\s\j\v\g\l\8\j\c\5\k\u\e\n\w\2\a\p\9\n\9\l\9\u\p\3\b\1\5\y\4\x\j\d\w\w\w\k\f\v\f\l\1\f\1\d\1\m\e\e\l\s\c\f\u\b\r\y\z\6\1\m\w\9\z\a\3\w\x\2\z\5\5\9\y\m\y\o\s\4\9\k\2\1\i\5\6\f\x\x\p\n\h\3\s\v\z\p\m\5\a\b\u\9\1\s\u\4\m\q\2\3\1\s\s\r\s\b\b\z\6\i\j\s\m\f\x\9\a\x\w\3\p\l\8\o\l\j\c\5\7\t\r\z\d\o\a\n\v\q\4\k\8\e\y\y\p\g\k\w\8\4\j\2\6\i\o\q\v\7\y\9\v\2\m\s\z\q\w\q\l\5\n\r\r\a\z\j\d\l\n\6\7\e\i\b\a\o\8\y\l\j\l\p\0\5\j\z\x\b\m\5\p\q\q\m\z\o\0\t\7\r\e\g\d\b\2\z\8\x\1\n\1\w\k\l\w\r\k\h\8\k\1\z\e\x\6\y\f\i\p\a\4\l\6\a\v\8\h\z\k\p\g\1\3\e\c\x\f\n\8\2\x\9\6\c\v\8\q\p\u\0\l\x\p\s\k\h\r\m\t\n\j\y\f\5\p\r\h\g\i\9\x\l\8\l\y\m\l\u\8\w\5\7\n\t\a\b\n\s\2\v\7\e\j\e\7\j\7\n\3\d\y\c\l\8\w\i\2\c\v\r\z\f\8\x\r\g\f\u\9\z\9\o\p\8\i\2\9\v\8\l\x\p\n\l\8\6\a\j\4\8\w\d\x\9\r\i\s\2\z\a\m\i\7\t\r\n\z\2\a\b\m\o\l\f\y\k\5\a\0\i\d\m\8\c\6 ]] 00:28:08.246 10:42:02 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:28:08.246 10:42:02 -- dd/posix.sh@86 -- # gen_bytes 512 00:28:08.246 10:42:02 -- dd/common.sh@98 -- # xtrace_disable 00:28:08.246 10:42:02 -- common/autotest_common.sh@10 -- # set +x 00:28:08.505 10:42:02 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:08.505 10:42:02 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:28:08.505 [2024-07-12 10:42:02.200579] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:28:08.505 [2024-07-12 10:42:02.200729] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138645 ] 00:28:08.505 [2024-07-12 10:42:02.353969] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:08.765 [2024-07-12 10:42:02.534177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:09.957  Copying: 512/512 [B] (average 500 kBps) 00:28:09.957 00:28:09.957 10:42:03 -- dd/posix.sh@93 -- # [[ x5kdah20jxzl90z59ucaegygyc7z4upn6m26ed12v1mio2ahebd4a70l0oqo9vww7bsskfj9g8r3abhkea3rbndg6x0r84sb9ant053u3s78ljcc1sipes18ljyewa3gqlkpurd3wuxyoxdg45jbzfiw25laixw26j1779gjj2nuryqkte91uhl48s4pzquvf10wukxtvc7k6bkoll34k8hug14de3j0lrmltz2re32uq7827q6wl8zzppsi2khspfb1orwcclxdxpk0k8qtyxhtlttzmzfnrxy0j1mbdzit3gs4x2ounyuhqe9svkh9pj61ixfo2gjtyqg4c2714ohp07h16ts3lc6zzutbuoh78a3l51upmn7ese1swio5254jb8398gxuefivrys9vaq283waxo07nx250aulh1fkiipkn1i3yc2chtn78ds1zgq32fu6qrnsjen4rd9bkuu73jtcsayvtw2yd5oyz94ew416w7mt2lxfk0fv692o == \x\5\k\d\a\h\2\0\j\x\z\l\9\0\z\5\9\u\c\a\e\g\y\g\y\c\7\z\4\u\p\n\6\m\2\6\e\d\1\2\v\1\m\i\o\2\a\h\e\b\d\4\a\7\0\l\0\o\q\o\9\v\w\w\7\b\s\s\k\f\j\9\g\8\r\3\a\b\h\k\e\a\3\r\b\n\d\g\6\x\0\r\8\4\s\b\9\a\n\t\0\5\3\u\3\s\7\8\l\j\c\c\1\s\i\p\e\s\1\8\l\j\y\e\w\a\3\g\q\l\k\p\u\r\d\3\w\u\x\y\o\x\d\g\4\5\j\b\z\f\i\w\2\5\l\a\i\x\w\2\6\j\1\7\7\9\g\j\j\2\n\u\r\y\q\k\t\e\9\1\u\h\l\4\8\s\4\p\z\q\u\v\f\1\0\w\u\k\x\t\v\c\7\k\6\b\k\o\l\l\3\4\k\8\h\u\g\1\4\d\e\3\j\0\l\r\m\l\t\z\2\r\e\3\2\u\q\7\8\2\7\q\6\w\l\8\z\z\p\p\s\i\2\k\h\s\p\f\b\1\o\r\w\c\c\l\x\d\x\p\k\0\k\8\q\t\y\x\h\t\l\t\t\z\m\z\f\n\r\x\y\0\j\1\m\b\d\z\i\t\3\g\s\4\x\2\o\u\n\y\u\h\q\e\9\s\v\k\h\9\p\j\6\1\i\x\f\o\2\g\j\t\y\q\g\4\c\2\7\1\4\o\h\p\0\7\h\1\6\t\s\3\l\c\6\z\z\u\t\b\u\o\h\7\8\a\3\l\5\1\u\p\m\n\7\e\s\e\1\s\w\i\o\5\2\5\4\j\b\8\3\9\8\g\x\u\e\f\i\v\r\y\s\9\v\a\q\2\8\3\w\a\x\o\0\7\n\x\2\5\0\a\u\l\h\1\f\k\i\i\p\k\n\1\i\3\y\c\2\c\h\t\n\7\8\d\s\1\z\g\q\3\2\f\u\6\q\r\n\s\j\e\n\4\r\d\9\b\k\u\u\7\3\j\t\c\s\a\y\v\t\w\2\y\d\5\o\y\z\9\4\e\w\4\1\6\w\7\m\t\2\l\x\f\k\0\f\v\6\9\2\o ]] 00:28:09.957 10:42:03 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:09.957 10:42:03 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:28:09.957 [2024-07-12 10:42:03.786950] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:28:09.957 [2024-07-12 10:42:03.787942] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138673 ] 00:28:10.215 [2024-07-12 10:42:03.953443] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:10.215 [2024-07-12 10:42:04.124630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:11.407  Copying: 512/512 [B] (average 500 kBps) 00:28:11.407 00:28:11.408 10:42:05 -- dd/posix.sh@93 -- # [[ x5kdah20jxzl90z59ucaegygyc7z4upn6m26ed12v1mio2ahebd4a70l0oqo9vww7bsskfj9g8r3abhkea3rbndg6x0r84sb9ant053u3s78ljcc1sipes18ljyewa3gqlkpurd3wuxyoxdg45jbzfiw25laixw26j1779gjj2nuryqkte91uhl48s4pzquvf10wukxtvc7k6bkoll34k8hug14de3j0lrmltz2re32uq7827q6wl8zzppsi2khspfb1orwcclxdxpk0k8qtyxhtlttzmzfnrxy0j1mbdzit3gs4x2ounyuhqe9svkh9pj61ixfo2gjtyqg4c2714ohp07h16ts3lc6zzutbuoh78a3l51upmn7ese1swio5254jb8398gxuefivrys9vaq283waxo07nx250aulh1fkiipkn1i3yc2chtn78ds1zgq32fu6qrnsjen4rd9bkuu73jtcsayvtw2yd5oyz94ew416w7mt2lxfk0fv692o == \x\5\k\d\a\h\2\0\j\x\z\l\9\0\z\5\9\u\c\a\e\g\y\g\y\c\7\z\4\u\p\n\6\m\2\6\e\d\1\2\v\1\m\i\o\2\a\h\e\b\d\4\a\7\0\l\0\o\q\o\9\v\w\w\7\b\s\s\k\f\j\9\g\8\r\3\a\b\h\k\e\a\3\r\b\n\d\g\6\x\0\r\8\4\s\b\9\a\n\t\0\5\3\u\3\s\7\8\l\j\c\c\1\s\i\p\e\s\1\8\l\j\y\e\w\a\3\g\q\l\k\p\u\r\d\3\w\u\x\y\o\x\d\g\4\5\j\b\z\f\i\w\2\5\l\a\i\x\w\2\6\j\1\7\7\9\g\j\j\2\n\u\r\y\q\k\t\e\9\1\u\h\l\4\8\s\4\p\z\q\u\v\f\1\0\w\u\k\x\t\v\c\7\k\6\b\k\o\l\l\3\4\k\8\h\u\g\1\4\d\e\3\j\0\l\r\m\l\t\z\2\r\e\3\2\u\q\7\8\2\7\q\6\w\l\8\z\z\p\p\s\i\2\k\h\s\p\f\b\1\o\r\w\c\c\l\x\d\x\p\k\0\k\8\q\t\y\x\h\t\l\t\t\z\m\z\f\n\r\x\y\0\j\1\m\b\d\z\i\t\3\g\s\4\x\2\o\u\n\y\u\h\q\e\9\s\v\k\h\9\p\j\6\1\i\x\f\o\2\g\j\t\y\q\g\4\c\2\7\1\4\o\h\p\0\7\h\1\6\t\s\3\l\c\6\z\z\u\t\b\u\o\h\7\8\a\3\l\5\1\u\p\m\n\7\e\s\e\1\s\w\i\o\5\2\5\4\j\b\8\3\9\8\g\x\u\e\f\i\v\r\y\s\9\v\a\q\2\8\3\w\a\x\o\0\7\n\x\2\5\0\a\u\l\h\1\f\k\i\i\p\k\n\1\i\3\y\c\2\c\h\t\n\7\8\d\s\1\z\g\q\3\2\f\u\6\q\r\n\s\j\e\n\4\r\d\9\b\k\u\u\7\3\j\t\c\s\a\y\v\t\w\2\y\d\5\o\y\z\9\4\e\w\4\1\6\w\7\m\t\2\l\x\f\k\0\f\v\6\9\2\o ]] 00:28:11.408 10:42:05 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:11.408 10:42:05 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:28:11.667 [2024-07-12 10:42:05.354537] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:28:11.667 [2024-07-12 10:42:05.354690] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138698 ] 00:28:11.667 [2024-07-12 10:42:05.508617] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:11.925 [2024-07-12 10:42:05.670519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:13.120  Copying: 512/512 [B] (average 166 kBps) 00:28:13.120 00:28:13.120 10:42:06 -- dd/posix.sh@93 -- # [[ x5kdah20jxzl90z59ucaegygyc7z4upn6m26ed12v1mio2ahebd4a70l0oqo9vww7bsskfj9g8r3abhkea3rbndg6x0r84sb9ant053u3s78ljcc1sipes18ljyewa3gqlkpurd3wuxyoxdg45jbzfiw25laixw26j1779gjj2nuryqkte91uhl48s4pzquvf10wukxtvc7k6bkoll34k8hug14de3j0lrmltz2re32uq7827q6wl8zzppsi2khspfb1orwcclxdxpk0k8qtyxhtlttzmzfnrxy0j1mbdzit3gs4x2ounyuhqe9svkh9pj61ixfo2gjtyqg4c2714ohp07h16ts3lc6zzutbuoh78a3l51upmn7ese1swio5254jb8398gxuefivrys9vaq283waxo07nx250aulh1fkiipkn1i3yc2chtn78ds1zgq32fu6qrnsjen4rd9bkuu73jtcsayvtw2yd5oyz94ew416w7mt2lxfk0fv692o == \x\5\k\d\a\h\2\0\j\x\z\l\9\0\z\5\9\u\c\a\e\g\y\g\y\c\7\z\4\u\p\n\6\m\2\6\e\d\1\2\v\1\m\i\o\2\a\h\e\b\d\4\a\7\0\l\0\o\q\o\9\v\w\w\7\b\s\s\k\f\j\9\g\8\r\3\a\b\h\k\e\a\3\r\b\n\d\g\6\x\0\r\8\4\s\b\9\a\n\t\0\5\3\u\3\s\7\8\l\j\c\c\1\s\i\p\e\s\1\8\l\j\y\e\w\a\3\g\q\l\k\p\u\r\d\3\w\u\x\y\o\x\d\g\4\5\j\b\z\f\i\w\2\5\l\a\i\x\w\2\6\j\1\7\7\9\g\j\j\2\n\u\r\y\q\k\t\e\9\1\u\h\l\4\8\s\4\p\z\q\u\v\f\1\0\w\u\k\x\t\v\c\7\k\6\b\k\o\l\l\3\4\k\8\h\u\g\1\4\d\e\3\j\0\l\r\m\l\t\z\2\r\e\3\2\u\q\7\8\2\7\q\6\w\l\8\z\z\p\p\s\i\2\k\h\s\p\f\b\1\o\r\w\c\c\l\x\d\x\p\k\0\k\8\q\t\y\x\h\t\l\t\t\z\m\z\f\n\r\x\y\0\j\1\m\b\d\z\i\t\3\g\s\4\x\2\o\u\n\y\u\h\q\e\9\s\v\k\h\9\p\j\6\1\i\x\f\o\2\g\j\t\y\q\g\4\c\2\7\1\4\o\h\p\0\7\h\1\6\t\s\3\l\c\6\z\z\u\t\b\u\o\h\7\8\a\3\l\5\1\u\p\m\n\7\e\s\e\1\s\w\i\o\5\2\5\4\j\b\8\3\9\8\g\x\u\e\f\i\v\r\y\s\9\v\a\q\2\8\3\w\a\x\o\0\7\n\x\2\5\0\a\u\l\h\1\f\k\i\i\p\k\n\1\i\3\y\c\2\c\h\t\n\7\8\d\s\1\z\g\q\3\2\f\u\6\q\r\n\s\j\e\n\4\r\d\9\b\k\u\u\7\3\j\t\c\s\a\y\v\t\w\2\y\d\5\o\y\z\9\4\e\w\4\1\6\w\7\m\t\2\l\x\f\k\0\f\v\6\9\2\o ]] 00:28:13.120 10:42:06 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:13.120 10:42:06 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:28:13.120 [2024-07-12 10:42:06.937275] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:28:13.120 [2024-07-12 10:42:06.937489] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138715 ] 00:28:13.379 [2024-07-12 10:42:07.105345] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:13.379 [2024-07-12 10:42:07.276768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:14.570  Copying: 512/512 [B] (average 250 kBps) 00:28:14.570 00:28:14.570 ************************************ 00:28:14.570 END TEST dd_flags_misc 00:28:14.570 ************************************ 00:28:14.571 10:42:08 -- dd/posix.sh@93 -- # [[ x5kdah20jxzl90z59ucaegygyc7z4upn6m26ed12v1mio2ahebd4a70l0oqo9vww7bsskfj9g8r3abhkea3rbndg6x0r84sb9ant053u3s78ljcc1sipes18ljyewa3gqlkpurd3wuxyoxdg45jbzfiw25laixw26j1779gjj2nuryqkte91uhl48s4pzquvf10wukxtvc7k6bkoll34k8hug14de3j0lrmltz2re32uq7827q6wl8zzppsi2khspfb1orwcclxdxpk0k8qtyxhtlttzmzfnrxy0j1mbdzit3gs4x2ounyuhqe9svkh9pj61ixfo2gjtyqg4c2714ohp07h16ts3lc6zzutbuoh78a3l51upmn7ese1swio5254jb8398gxuefivrys9vaq283waxo07nx250aulh1fkiipkn1i3yc2chtn78ds1zgq32fu6qrnsjen4rd9bkuu73jtcsayvtw2yd5oyz94ew416w7mt2lxfk0fv692o == \x\5\k\d\a\h\2\0\j\x\z\l\9\0\z\5\9\u\c\a\e\g\y\g\y\c\7\z\4\u\p\n\6\m\2\6\e\d\1\2\v\1\m\i\o\2\a\h\e\b\d\4\a\7\0\l\0\o\q\o\9\v\w\w\7\b\s\s\k\f\j\9\g\8\r\3\a\b\h\k\e\a\3\r\b\n\d\g\6\x\0\r\8\4\s\b\9\a\n\t\0\5\3\u\3\s\7\8\l\j\c\c\1\s\i\p\e\s\1\8\l\j\y\e\w\a\3\g\q\l\k\p\u\r\d\3\w\u\x\y\o\x\d\g\4\5\j\b\z\f\i\w\2\5\l\a\i\x\w\2\6\j\1\7\7\9\g\j\j\2\n\u\r\y\q\k\t\e\9\1\u\h\l\4\8\s\4\p\z\q\u\v\f\1\0\w\u\k\x\t\v\c\7\k\6\b\k\o\l\l\3\4\k\8\h\u\g\1\4\d\e\3\j\0\l\r\m\l\t\z\2\r\e\3\2\u\q\7\8\2\7\q\6\w\l\8\z\z\p\p\s\i\2\k\h\s\p\f\b\1\o\r\w\c\c\l\x\d\x\p\k\0\k\8\q\t\y\x\h\t\l\t\t\z\m\z\f\n\r\x\y\0\j\1\m\b\d\z\i\t\3\g\s\4\x\2\o\u\n\y\u\h\q\e\9\s\v\k\h\9\p\j\6\1\i\x\f\o\2\g\j\t\y\q\g\4\c\2\7\1\4\o\h\p\0\7\h\1\6\t\s\3\l\c\6\z\z\u\t\b\u\o\h\7\8\a\3\l\5\1\u\p\m\n\7\e\s\e\1\s\w\i\o\5\2\5\4\j\b\8\3\9\8\g\x\u\e\f\i\v\r\y\s\9\v\a\q\2\8\3\w\a\x\o\0\7\n\x\2\5\0\a\u\l\h\1\f\k\i\i\p\k\n\1\i\3\y\c\2\c\h\t\n\7\8\d\s\1\z\g\q\3\2\f\u\6\q\r\n\s\j\e\n\4\r\d\9\b\k\u\u\7\3\j\t\c\s\a\y\v\t\w\2\y\d\5\o\y\z\9\4\e\w\4\1\6\w\7\m\t\2\l\x\f\k\0\f\v\6\9\2\o ]] 00:28:14.571 00:28:14.571 real 0m12.720s 00:28:14.571 user 0m9.861s 00:28:14.571 sys 0m1.775s 00:28:14.571 10:42:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:14.571 10:42:08 -- common/autotest_common.sh@10 -- # set +x 00:28:14.828 10:42:08 -- dd/posix.sh@131 -- # tests_forced_aio 00:28:14.828 10:42:08 -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', using AIO' 00:28:14.828 * Second test run, using AIO 00:28:14.828 10:42:08 -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:28:14.828 10:42:08 -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:28:14.828 10:42:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:14.828 10:42:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:14.828 10:42:08 -- common/autotest_common.sh@10 -- # set +x 00:28:14.828 ************************************ 00:28:14.828 START TEST dd_flag_append_forced_aio 00:28:14.828 ************************************ 00:28:14.828 10:42:08 -- common/autotest_common.sh@1104 -- # append 00:28:14.828 10:42:08 -- dd/posix.sh@16 -- # local dump0 00:28:14.828 10:42:08 -- dd/posix.sh@17 -- # local dump1 00:28:14.828 10:42:08 -- dd/posix.sh@19 -- # gen_bytes 32 00:28:14.828 10:42:08 -- dd/common.sh@98 -- # xtrace_disable 
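At this point the posix suite restarts with `DD_APP+=("--aio")`, so every remaining copy exercises spdk_dd's POSIX AIO path instead of plain read/write. The first AIO case, dd_flag_append_forced_aio, pre-fills the destination with one 32-byte string, copies a second one with `--oflag=append`, and asserts the destination is the concatenation (the `[[ u1fbk...v2b35... == ... ]]` record below). The same semantics in a standalone sketch (illustrative payloads; `conv=notrunc` keeps plain dd from truncating before it appends):

    #!/usr/bin/env bash
    set -euo pipefail
    dump0="payload-written-second" dump1="payload-written-first"  # illustrative
    dst=$(mktemp)
    printf %s "$dump1" > "$dst"                       # pre-existing contents
    printf %s "$dump0" | dd of="$dst" oflag=append conv=notrunc 2>/dev/null
    [[ "$(cat "$dst")" == "$dump1$dump0" ]] || echo "append semantics violated"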
00:28:14.828 10:42:08 -- common/autotest_common.sh@10 -- # set +x 00:28:14.828 10:42:08 -- dd/posix.sh@19 -- # dump0=v2b35wtb4ayzd6lhwmne66r4rhv6zk0q 00:28:14.828 10:42:08 -- dd/posix.sh@20 -- # gen_bytes 32 00:28:14.828 10:42:08 -- dd/common.sh@98 -- # xtrace_disable 00:28:14.828 10:42:08 -- common/autotest_common.sh@10 -- # set +x 00:28:14.828 10:42:08 -- dd/posix.sh@20 -- # dump1=u1fbkxtogbp5i989f9adkqph427zwygt 00:28:14.828 10:42:08 -- dd/posix.sh@22 -- # printf %s v2b35wtb4ayzd6lhwmne66r4rhv6zk0q 00:28:14.828 10:42:08 -- dd/posix.sh@23 -- # printf %s u1fbkxtogbp5i989f9adkqph427zwygt 00:28:14.828 10:42:08 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:28:14.828 [2024-07-12 10:42:08.594485] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:28:14.828 [2024-07-12 10:42:08.595426] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138766 ] 00:28:15.085 [2024-07-12 10:42:08.760945] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:15.085 [2024-07-12 10:42:08.916904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:16.272  Copying: 32/32 [B] (average 31 kBps) 00:28:16.272 00:28:16.272 10:42:10 -- dd/posix.sh@27 -- # [[ u1fbkxtogbp5i989f9adkqph427zwygtv2b35wtb4ayzd6lhwmne66r4rhv6zk0q == \u\1\f\b\k\x\t\o\g\b\p\5\i\9\8\9\f\9\a\d\k\q\p\h\4\2\7\z\w\y\g\t\v\2\b\3\5\w\t\b\4\a\y\z\d\6\l\h\w\m\n\e\6\6\r\4\r\h\v\6\z\k\0\q ]] 00:28:16.272 00:28:16.272 real 0m1.594s 00:28:16.272 user 0m1.203s 00:28:16.272 sys 0m0.242s 00:28:16.272 ************************************ 00:28:16.272 END TEST dd_flag_append_forced_aio 00:28:16.272 ************************************ 00:28:16.272 10:42:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:16.272 10:42:10 -- common/autotest_common.sh@10 -- # set +x 00:28:16.272 10:42:10 -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:28:16.272 10:42:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:16.273 10:42:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:16.273 10:42:10 -- common/autotest_common.sh@10 -- # set +x 00:28:16.273 ************************************ 00:28:16.273 START TEST dd_flag_directory_forced_aio 00:28:16.273 ************************************ 00:28:16.273 10:42:10 -- common/autotest_common.sh@1104 -- # directory 00:28:16.273 10:42:10 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:16.273 10:42:10 -- common/autotest_common.sh@640 -- # local es=0 00:28:16.273 10:42:10 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:16.273 10:42:10 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:16.273 10:42:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:16.273 10:42:10 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:16.530 10:42:10 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:16.530 10:42:10 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:16.530 10:42:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:16.530 10:42:10 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:16.530 10:42:10 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:28:16.530 10:42:10 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:16.530 [2024-07-12 10:42:10.229573] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:28:16.530 [2024-07-12 10:42:10.229714] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138816 ] 00:28:16.530 [2024-07-12 10:42:10.384825] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:16.788 [2024-07-12 10:42:10.602035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:17.045 [2024-07-12 10:42:10.848097] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:28:17.045 [2024-07-12 10:42:10.848169] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:28:17.045 [2024-07-12 10:42:10.848192] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:17.612 [2024-07-12 10:42:11.435898] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:28:18.179 10:42:11 -- common/autotest_common.sh@643 -- # es=236 00:28:18.179 10:42:11 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:18.179 10:42:11 -- common/autotest_common.sh@652 -- # es=108 00:28:18.179 10:42:11 -- common/autotest_common.sh@653 -- # case "$es" in 00:28:18.179 10:42:11 -- common/autotest_common.sh@660 -- # es=1 00:28:18.179 10:42:11 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:18.179 10:42:11 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:28:18.179 10:42:11 -- common/autotest_common.sh@640 -- # local es=0 00:28:18.179 10:42:11 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:28:18.179 10:42:11 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:18.179 10:42:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:18.179 10:42:11 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:18.179 10:42:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:18.179 10:42:11 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:18.179 10:42:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:18.179 10:42:11 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
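dd_flag_directory_forced_aio is a negative test: opening a regular file with `--iflag=directory` (and then `--oflag=directory`) must fail, and the expected `Could not open file ...: Not a directory` errors appear above. The `es=236` / `(( es > 128 ))` / `es=108` / `es=1` records are the harness folding spdk_dd's raw exit status down to a single truthy failure so the NOT wrapper can assert it. The same assertion reduced to its core (simplified NOT helper, not the harness's actual one):

    #!/usr/bin/env bash
    NOT() { ! "$@"; }                 # succeed only if the command fails
    f=$(mktemp)
    NOT dd if="$f" iflag=directory of=/dev/null 2>/dev/null \
      && echo "ok: O_DIRECTORY open of a regular file was rejected"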
00:28:18.179 10:42:11 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:28:18.179 10:42:11 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:28:18.179 [2024-07-12 10:42:11.897007] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:28:18.179 [2024-07-12 10:42:11.897170] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138850 ] 00:28:18.179 [2024-07-12 10:42:12.049413] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:18.438 [2024-07-12 10:42:12.227741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:18.696 [2024-07-12 10:42:12.511107] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:28:18.696 [2024-07-12 10:42:12.511196] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:28:18.696 [2024-07-12 10:42:12.511227] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:19.265 [2024-07-12 10:42:13.138127] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:28:19.831 10:42:13 -- common/autotest_common.sh@643 -- # es=236 00:28:19.831 10:42:13 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:19.831 10:42:13 -- common/autotest_common.sh@652 -- # es=108 00:28:19.831 10:42:13 -- common/autotest_common.sh@653 -- # case "$es" in 00:28:19.831 10:42:13 -- common/autotest_common.sh@660 -- # es=1 00:28:19.831 10:42:13 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:19.831 00:28:19.831 real 0m3.322s 00:28:19.831 user 0m2.632s 00:28:19.831 sys 0m0.490s 00:28:19.831 ************************************ 00:28:19.831 END TEST dd_flag_directory_forced_aio 00:28:19.831 ************************************ 00:28:19.831 10:42:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:19.831 10:42:13 -- common/autotest_common.sh@10 -- # set +x 00:28:19.831 10:42:13 -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:28:19.831 10:42:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:19.832 10:42:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:19.832 10:42:13 -- common/autotest_common.sh@10 -- # set +x 00:28:19.832 ************************************ 00:28:19.832 START TEST dd_flag_nofollow_forced_aio 00:28:19.832 ************************************ 00:28:19.832 10:42:13 -- common/autotest_common.sh@1104 -- # nofollow 00:28:19.832 10:42:13 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:28:19.832 10:42:13 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:28:19.832 10:42:13 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:28:19.832 10:42:13 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:28:19.832 10:42:13 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:19.832 10:42:13 -- common/autotest_common.sh@640 -- # local es=0 00:28:19.832 10:42:13 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:19.832 10:42:13 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:19.832 10:42:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:19.832 10:42:13 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:19.832 10:42:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:19.832 10:42:13 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:19.832 10:42:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:19.832 10:42:13 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:19.832 10:42:13 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:28:19.832 10:42:13 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:19.832 [2024-07-12 10:42:13.626824] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:28:19.832 [2024-07-12 10:42:13.627024] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138901 ] 00:28:20.089 [2024-07-12 10:42:13.793480] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:20.089 [2024-07-12 10:42:13.972442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:20.348 [2024-07-12 10:42:14.254282] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:28:20.348 [2024-07-12 10:42:14.254374] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:28:20.348 [2024-07-12 10:42:14.254401] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:21.282 [2024-07-12 10:42:14.882323] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:28:21.541 10:42:15 -- common/autotest_common.sh@643 -- # es=216 00:28:21.541 10:42:15 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:21.541 10:42:15 -- common/autotest_common.sh@652 -- # es=88 00:28:21.541 10:42:15 -- common/autotest_common.sh@653 -- # case "$es" in 00:28:21.541 10:42:15 -- common/autotest_common.sh@660 -- # es=1 00:28:21.541 10:42:15 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:21.541 10:42:15 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:28:21.541 10:42:15 -- common/autotest_common.sh@640 -- # local es=0 00:28:21.541 10:42:15 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:28:21.541 10:42:15 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:21.541 10:42:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:21.541 10:42:15 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:21.541 10:42:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:21.541 10:42:15 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:21.541 10:42:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:21.541 10:42:15 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:21.541 10:42:15 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:28:21.541 10:42:15 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:28:21.541 [2024-07-12 10:42:15.315997] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:28:21.541 [2024-07-12 10:42:15.316195] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138921 ] 00:28:21.801 [2024-07-12 10:42:15.487731] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:21.801 [2024-07-12 10:42:15.706733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:22.369 [2024-07-12 10:42:15.989552] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:28:22.369 [2024-07-12 10:42:15.989899] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:28:22.369 [2024-07-12 10:42:15.989966] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:22.938 [2024-07-12 10:42:16.619694] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:28:23.197 10:42:16 -- common/autotest_common.sh@643 -- # es=216 00:28:23.197 10:42:16 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:23.197 10:42:16 -- common/autotest_common.sh@652 -- # es=88 00:28:23.197 10:42:16 -- common/autotest_common.sh@653 -- # case "$es" in 00:28:23.197 10:42:16 -- common/autotest_common.sh@660 -- # es=1 00:28:23.197 10:42:16 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:23.197 10:42:16 -- dd/posix.sh@46 -- # gen_bytes 512 00:28:23.197 10:42:16 -- dd/common.sh@98 -- # xtrace_disable 00:28:23.197 10:42:16 -- common/autotest_common.sh@10 -- # set +x 00:28:23.197 10:42:16 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:23.197 [2024-07-12 10:42:17.048668] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:28:23.197 [2024-07-12 10:42:17.049109] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138943 ] 00:28:23.456 [2024-07-12 10:42:17.202038] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:23.715 [2024-07-12 10:42:17.387960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:24.910  Copying: 512/512 [B] (average 500 kBps) 00:28:24.910 00:28:24.910 ************************************ 00:28:24.910 END TEST dd_flag_nofollow_forced_aio 00:28:24.910 ************************************ 00:28:24.910 10:42:18 -- dd/posix.sh@49 -- # [[ xedz571ce5edwwnwfutf6rek9dwefytz31mxykh9vqql6g29j6ceolggjrd647y8od68lwmj6yv6vwvwxggvbmyh41tqwsnweasnm2o4fpob62szl97weltehb2k83x93tqxu6d2tht3hfff31tul0iepj2s9lug3vx4v1tcj5res6vg8i3ploykdhl7dqxt9rwe31uxipz39rc5lgjlo5n1viu97aruy0kijvdaq6r6yc0vd8ug2t4mpps7kkhob5bhytim2pc6gcpyqsojey5udlua0ts4l4oymgjxjvw2d742s7up8y1o1zcgdh7gl5hbrrhahky22z315gr96tigwgw4b8e4cgtkglpada89sglh3faqinn28r1c7r4sxs4musb08lyrtfnqb5hqn3wkdcijpkgsr7hy5vq7ufys8cjsqt4wi7wx1hzuzfy6g49nxscxybyomshefd7xg0nh36vi264sodlhjvhdpip7mkfc2on25vxsnqklk99x == \x\e\d\z\5\7\1\c\e\5\e\d\w\w\n\w\f\u\t\f\6\r\e\k\9\d\w\e\f\y\t\z\3\1\m\x\y\k\h\9\v\q\q\l\6\g\2\9\j\6\c\e\o\l\g\g\j\r\d\6\4\7\y\8\o\d\6\8\l\w\m\j\6\y\v\6\v\w\v\w\x\g\g\v\b\m\y\h\4\1\t\q\w\s\n\w\e\a\s\n\m\2\o\4\f\p\o\b\6\2\s\z\l\9\7\w\e\l\t\e\h\b\2\k\8\3\x\9\3\t\q\x\u\6\d\2\t\h\t\3\h\f\f\f\3\1\t\u\l\0\i\e\p\j\2\s\9\l\u\g\3\v\x\4\v\1\t\c\j\5\r\e\s\6\v\g\8\i\3\p\l\o\y\k\d\h\l\7\d\q\x\t\9\r\w\e\3\1\u\x\i\p\z\3\9\r\c\5\l\g\j\l\o\5\n\1\v\i\u\9\7\a\r\u\y\0\k\i\j\v\d\a\q\6\r\6\y\c\0\v\d\8\u\g\2\t\4\m\p\p\s\7\k\k\h\o\b\5\b\h\y\t\i\m\2\p\c\6\g\c\p\y\q\s\o\j\e\y\5\u\d\l\u\a\0\t\s\4\l\4\o\y\m\g\j\x\j\v\w\2\d\7\4\2\s\7\u\p\8\y\1\o\1\z\c\g\d\h\7\g\l\5\h\b\r\r\h\a\h\k\y\2\2\z\3\1\5\g\r\9\6\t\i\g\w\g\w\4\b\8\e\4\c\g\t\k\g\l\p\a\d\a\8\9\s\g\l\h\3\f\a\q\i\n\n\2\8\r\1\c\7\r\4\s\x\s\4\m\u\s\b\0\8\l\y\r\t\f\n\q\b\5\h\q\n\3\w\k\d\c\i\j\p\k\g\s\r\7\h\y\5\v\q\7\u\f\y\s\8\c\j\s\q\t\4\w\i\7\w\x\1\h\z\u\z\f\y\6\g\4\9\n\x\s\c\x\y\b\y\o\m\s\h\e\f\d\7\x\g\0\n\h\3\6\v\i\2\6\4\s\o\d\l\h\j\v\h\d\p\i\p\7\m\k\f\c\2\o\n\2\5\v\x\s\n\q\k\l\k\9\9\x ]] 00:28:24.910 00:28:24.910 real 0m5.152s 00:28:24.910 user 0m4.062s 00:28:24.910 sys 0m0.755s 00:28:24.910 10:42:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:24.910 10:42:18 -- common/autotest_common.sh@10 -- # set +x 00:28:24.910 10:42:18 -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:28:24.910 10:42:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:24.910 10:42:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:24.910 10:42:18 -- common/autotest_common.sh@10 -- # set +x 00:28:24.910 ************************************ 00:28:24.910 START TEST dd_flag_noatime_forced_aio 00:28:24.910 ************************************ 00:28:24.910 10:42:18 -- common/autotest_common.sh@1104 -- # noatime 00:28:24.910 10:42:18 -- dd/posix.sh@53 -- # local atime_if 00:28:24.910 10:42:18 -- dd/posix.sh@54 -- # local atime_of 00:28:24.910 10:42:18 -- dd/posix.sh@58 -- # gen_bytes 512 00:28:24.910 10:42:18 -- dd/common.sh@98 -- # xtrace_disable 00:28:24.910 10:42:18 -- common/autotest_common.sh@10 -- # set +x 00:28:24.910 10:42:18 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:24.910 10:42:18 -- dd/posix.sh@60 -- # atime_if=1720780937 
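The nofollow case just closed works the same way with symlinks: dd.dump0.link and dd.dump1.link point at the real dump files, opening through them with `--iflag=nofollow` / `--oflag=nofollow` must fail with `Too many levels of symbolic links` (ELOOP, visible above), and a final copy through the link without the flag must succeed (the pid138943 run). Reduced to a sketch:

    #!/usr/bin/env bash
    set -euo pipefail
    f=$(mktemp); ln -fs "$f" "$f.link"
    if ! dd if="$f.link" iflag=nofollow of=/dev/null 2>/dev/null; then
      echo "ok: ELOOP, the symlink was not followed"
    fi
    dd if="$f.link" of=/dev/null 2>/dev/null   # without nofollow the link resolves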
00:28:24.910 10:42:18 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:24.910 10:42:18 -- dd/posix.sh@61 -- # atime_of=1720780938 00:28:24.910 10:42:18 -- dd/posix.sh@66 -- # sleep 1 00:28:26.285 10:42:19 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:26.285 [2024-07-12 10:42:19.835813] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:28:26.285 [2024-07-12 10:42:19.836181] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139011 ] 00:28:26.285 [2024-07-12 10:42:19.988359] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:26.285 [2024-07-12 10:42:20.171106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:27.919  Copying: 512/512 [B] (average 500 kBps) 00:28:27.919 00:28:27.919 10:42:21 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:27.919 10:42:21 -- dd/posix.sh@69 -- # (( atime_if == 1720780937 )) 00:28:27.919 10:42:21 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:27.919 10:42:21 -- dd/posix.sh@70 -- # (( atime_of == 1720780938 )) 00:28:27.919 10:42:21 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:27.919 [2024-07-12 10:42:21.563421] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:28:27.919 [2024-07-12 10:42:21.563863] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139053 ] 00:28:27.919 [2024-07-12 10:42:21.730571] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:28.190 [2024-07-12 10:42:21.921458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:29.435  Copying: 512/512 [B] (average 500 kBps) 00:28:29.435 00:28:29.435 10:42:23 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:29.435 ************************************ 00:28:29.435 END TEST dd_flag_noatime_forced_aio 00:28:29.435 ************************************ 00:28:29.435 10:42:23 -- dd/posix.sh@73 -- # (( atime_if < 1720780942 )) 00:28:29.435 00:28:29.435 real 0m4.489s 00:28:29.435 user 0m2.678s 00:28:29.435 sys 0m0.543s 00:28:29.435 10:42:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:29.435 10:42:23 -- common/autotest_common.sh@10 -- # set +x 00:28:29.435 10:42:23 -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:28:29.435 10:42:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:29.435 10:42:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:29.435 10:42:23 -- common/autotest_common.sh@10 -- # set +x 00:28:29.435 ************************************ 00:28:29.435 START TEST dd_flags_misc_forced_aio 00:28:29.435 ************************************ 00:28:29.435 10:42:23 -- common/autotest_common.sh@1104 -- # io 00:28:29.435 10:42:23 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:28:29.435 10:42:23 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:28:29.435 10:42:23 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:28:29.435 10:42:23 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:28:29.435 10:42:23 -- dd/posix.sh@86 -- # gen_bytes 512 00:28:29.435 10:42:23 -- dd/common.sh@98 -- # xtrace_disable 00:28:29.435 10:42:23 -- common/autotest_common.sh@10 -- # set +x 00:28:29.435 10:42:23 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:29.435 10:42:23 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:28:29.693 [2024-07-12 10:42:23.378099] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
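dd_flag_noatime_forced_aio repeats the earlier atime check through the AIO path; the literals it compares are plain epoch seconds and line up with the wall-clock stamps on the surrounding records, for instance:

    date -u -d @1720780937    # Fri Jul 12 10:42:17 UTC 2024, one second
                              # before the 10:42:18 stat record that read it

From here the dd_flags_misc matrix reruns under `--aio` with fresh random payloads, which is why the expected strings in the `[[ ... ]]` records below (24yl4x..., fs8zl...) differ from the first pass.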
00:28:29.693 [2024-07-12 10:42:23.378475] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139096 ] 00:28:29.693 [2024-07-12 10:42:23.545674] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:29.952 [2024-07-12 10:42:23.728641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:31.148  Copying: 512/512 [B] (average 500 kBps) 00:28:31.148 00:28:31.148 10:42:25 -- dd/posix.sh@93 -- # [[ 24yl4x8x94libhxme1m1yaqi79yddghlt2a4qf80ralpirl6wc9ej0hyzuznu1p44qzjstui752y2oz50yhi61wdmgv8xxxpwpgnybyjd63rpztu3bxsq0gbp0hou2cpm0g2fl7di7axj3toqzj67bvidocs59uzn3lpw8b9t3d6dhutt0q62rz4pt0hlv8k2xpw2rzyneer28ti7klogsrpj2xggoy3t6hffmm60cz7m3yq78m9zg12r60frtpa645okjjwakuebxh1xtrnoyeqgyferc8aw5z7eakolqoa58mskald0pm62v2j4qzb96wjjimlmvhp2ltgcz6wqfvkfj1n5u2b0f1986oicgahdblov6xpuirsjj1bbqewkdpz4kso6ce0sbevn7ie8a9cyfk63f9q70yc8n8ars7egcrwauwgccjzqmb5ul7h9y3qegnuy2vi826nvgmbsg38kwnfym8pnh6cbnwb0t63s4rpdbd7uyqtt0dcwp6w == \2\4\y\l\4\x\8\x\9\4\l\i\b\h\x\m\e\1\m\1\y\a\q\i\7\9\y\d\d\g\h\l\t\2\a\4\q\f\8\0\r\a\l\p\i\r\l\6\w\c\9\e\j\0\h\y\z\u\z\n\u\1\p\4\4\q\z\j\s\t\u\i\7\5\2\y\2\o\z\5\0\y\h\i\6\1\w\d\m\g\v\8\x\x\x\p\w\p\g\n\y\b\y\j\d\6\3\r\p\z\t\u\3\b\x\s\q\0\g\b\p\0\h\o\u\2\c\p\m\0\g\2\f\l\7\d\i\7\a\x\j\3\t\o\q\z\j\6\7\b\v\i\d\o\c\s\5\9\u\z\n\3\l\p\w\8\b\9\t\3\d\6\d\h\u\t\t\0\q\6\2\r\z\4\p\t\0\h\l\v\8\k\2\x\p\w\2\r\z\y\n\e\e\r\2\8\t\i\7\k\l\o\g\s\r\p\j\2\x\g\g\o\y\3\t\6\h\f\f\m\m\6\0\c\z\7\m\3\y\q\7\8\m\9\z\g\1\2\r\6\0\f\r\t\p\a\6\4\5\o\k\j\j\w\a\k\u\e\b\x\h\1\x\t\r\n\o\y\e\q\g\y\f\e\r\c\8\a\w\5\z\7\e\a\k\o\l\q\o\a\5\8\m\s\k\a\l\d\0\p\m\6\2\v\2\j\4\q\z\b\9\6\w\j\j\i\m\l\m\v\h\p\2\l\t\g\c\z\6\w\q\f\v\k\f\j\1\n\5\u\2\b\0\f\1\9\8\6\o\i\c\g\a\h\d\b\l\o\v\6\x\p\u\i\r\s\j\j\1\b\b\q\e\w\k\d\p\z\4\k\s\o\6\c\e\0\s\b\e\v\n\7\i\e\8\a\9\c\y\f\k\6\3\f\9\q\7\0\y\c\8\n\8\a\r\s\7\e\g\c\r\w\a\u\w\g\c\c\j\z\q\m\b\5\u\l\7\h\9\y\3\q\e\g\n\u\y\2\v\i\8\2\6\n\v\g\m\b\s\g\3\8\k\w\n\f\y\m\8\p\n\h\6\c\b\n\w\b\0\t\6\3\s\4\r\p\d\b\d\7\u\y\q\t\t\0\d\c\w\p\6\w ]] 00:28:31.148 10:42:25 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:31.148 10:42:25 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:28:31.407 [2024-07-12 10:42:25.117534] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:28:31.407 [2024-07-12 10:42:25.118144] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139129 ] 00:28:31.407 [2024-07-12 10:42:25.286108] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:31.666 [2024-07-12 10:42:25.478577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:33.303  Copying: 512/512 [B] (average 500 kBps) 00:28:33.303 00:28:33.303 10:42:26 -- dd/posix.sh@93 -- # [[ 24yl4x8x94libhxme1m1yaqi79yddghlt2a4qf80ralpirl6wc9ej0hyzuznu1p44qzjstui752y2oz50yhi61wdmgv8xxxpwpgnybyjd63rpztu3bxsq0gbp0hou2cpm0g2fl7di7axj3toqzj67bvidocs59uzn3lpw8b9t3d6dhutt0q62rz4pt0hlv8k2xpw2rzyneer28ti7klogsrpj2xggoy3t6hffmm60cz7m3yq78m9zg12r60frtpa645okjjwakuebxh1xtrnoyeqgyferc8aw5z7eakolqoa58mskald0pm62v2j4qzb96wjjimlmvhp2ltgcz6wqfvkfj1n5u2b0f1986oicgahdblov6xpuirsjj1bbqewkdpz4kso6ce0sbevn7ie8a9cyfk63f9q70yc8n8ars7egcrwauwgccjzqmb5ul7h9y3qegnuy2vi826nvgmbsg38kwnfym8pnh6cbnwb0t63s4rpdbd7uyqtt0dcwp6w == \2\4\y\l\4\x\8\x\9\4\l\i\b\h\x\m\e\1\m\1\y\a\q\i\7\9\y\d\d\g\h\l\t\2\a\4\q\f\8\0\r\a\l\p\i\r\l\6\w\c\9\e\j\0\h\y\z\u\z\n\u\1\p\4\4\q\z\j\s\t\u\i\7\5\2\y\2\o\z\5\0\y\h\i\6\1\w\d\m\g\v\8\x\x\x\p\w\p\g\n\y\b\y\j\d\6\3\r\p\z\t\u\3\b\x\s\q\0\g\b\p\0\h\o\u\2\c\p\m\0\g\2\f\l\7\d\i\7\a\x\j\3\t\o\q\z\j\6\7\b\v\i\d\o\c\s\5\9\u\z\n\3\l\p\w\8\b\9\t\3\d\6\d\h\u\t\t\0\q\6\2\r\z\4\p\t\0\h\l\v\8\k\2\x\p\w\2\r\z\y\n\e\e\r\2\8\t\i\7\k\l\o\g\s\r\p\j\2\x\g\g\o\y\3\t\6\h\f\f\m\m\6\0\c\z\7\m\3\y\q\7\8\m\9\z\g\1\2\r\6\0\f\r\t\p\a\6\4\5\o\k\j\j\w\a\k\u\e\b\x\h\1\x\t\r\n\o\y\e\q\g\y\f\e\r\c\8\a\w\5\z\7\e\a\k\o\l\q\o\a\5\8\m\s\k\a\l\d\0\p\m\6\2\v\2\j\4\q\z\b\9\6\w\j\j\i\m\l\m\v\h\p\2\l\t\g\c\z\6\w\q\f\v\k\f\j\1\n\5\u\2\b\0\f\1\9\8\6\o\i\c\g\a\h\d\b\l\o\v\6\x\p\u\i\r\s\j\j\1\b\b\q\e\w\k\d\p\z\4\k\s\o\6\c\e\0\s\b\e\v\n\7\i\e\8\a\9\c\y\f\k\6\3\f\9\q\7\0\y\c\8\n\8\a\r\s\7\e\g\c\r\w\a\u\w\g\c\c\j\z\q\m\b\5\u\l\7\h\9\y\3\q\e\g\n\u\y\2\v\i\8\2\6\n\v\g\m\b\s\g\3\8\k\w\n\f\y\m\8\p\n\h\6\c\b\n\w\b\0\t\6\3\s\4\r\p\d\b\d\7\u\y\q\t\t\0\d\c\w\p\6\w ]] 00:28:33.303 10:42:26 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:33.303 10:42:26 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:28:33.303 [2024-07-12 10:42:26.877823] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:28:33.303 [2024-07-12 10:42:26.878252] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139153 ] 00:28:33.303 [2024-07-12 10:42:27.046237] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:33.562 [2024-07-12 10:42:27.238061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:34.757  Copying: 512/512 [B] (average 250 kBps) 00:28:34.757 00:28:34.758 10:42:28 -- dd/posix.sh@93 -- # [[ 24yl4x8x94libhxme1m1yaqi79yddghlt2a4qf80ralpirl6wc9ej0hyzuznu1p44qzjstui752y2oz50yhi61wdmgv8xxxpwpgnybyjd63rpztu3bxsq0gbp0hou2cpm0g2fl7di7axj3toqzj67bvidocs59uzn3lpw8b9t3d6dhutt0q62rz4pt0hlv8k2xpw2rzyneer28ti7klogsrpj2xggoy3t6hffmm60cz7m3yq78m9zg12r60frtpa645okjjwakuebxh1xtrnoyeqgyferc8aw5z7eakolqoa58mskald0pm62v2j4qzb96wjjimlmvhp2ltgcz6wqfvkfj1n5u2b0f1986oicgahdblov6xpuirsjj1bbqewkdpz4kso6ce0sbevn7ie8a9cyfk63f9q70yc8n8ars7egcrwauwgccjzqmb5ul7h9y3qegnuy2vi826nvgmbsg38kwnfym8pnh6cbnwb0t63s4rpdbd7uyqtt0dcwp6w == \2\4\y\l\4\x\8\x\9\4\l\i\b\h\x\m\e\1\m\1\y\a\q\i\7\9\y\d\d\g\h\l\t\2\a\4\q\f\8\0\r\a\l\p\i\r\l\6\w\c\9\e\j\0\h\y\z\u\z\n\u\1\p\4\4\q\z\j\s\t\u\i\7\5\2\y\2\o\z\5\0\y\h\i\6\1\w\d\m\g\v\8\x\x\x\p\w\p\g\n\y\b\y\j\d\6\3\r\p\z\t\u\3\b\x\s\q\0\g\b\p\0\h\o\u\2\c\p\m\0\g\2\f\l\7\d\i\7\a\x\j\3\t\o\q\z\j\6\7\b\v\i\d\o\c\s\5\9\u\z\n\3\l\p\w\8\b\9\t\3\d\6\d\h\u\t\t\0\q\6\2\r\z\4\p\t\0\h\l\v\8\k\2\x\p\w\2\r\z\y\n\e\e\r\2\8\t\i\7\k\l\o\g\s\r\p\j\2\x\g\g\o\y\3\t\6\h\f\f\m\m\6\0\c\z\7\m\3\y\q\7\8\m\9\z\g\1\2\r\6\0\f\r\t\p\a\6\4\5\o\k\j\j\w\a\k\u\e\b\x\h\1\x\t\r\n\o\y\e\q\g\y\f\e\r\c\8\a\w\5\z\7\e\a\k\o\l\q\o\a\5\8\m\s\k\a\l\d\0\p\m\6\2\v\2\j\4\q\z\b\9\6\w\j\j\i\m\l\m\v\h\p\2\l\t\g\c\z\6\w\q\f\v\k\f\j\1\n\5\u\2\b\0\f\1\9\8\6\o\i\c\g\a\h\d\b\l\o\v\6\x\p\u\i\r\s\j\j\1\b\b\q\e\w\k\d\p\z\4\k\s\o\6\c\e\0\s\b\e\v\n\7\i\e\8\a\9\c\y\f\k\6\3\f\9\q\7\0\y\c\8\n\8\a\r\s\7\e\g\c\r\w\a\u\w\g\c\c\j\z\q\m\b\5\u\l\7\h\9\y\3\q\e\g\n\u\y\2\v\i\8\2\6\n\v\g\m\b\s\g\3\8\k\w\n\f\y\m\8\p\n\h\6\c\b\n\w\b\0\t\6\3\s\4\r\p\d\b\d\7\u\y\q\t\t\0\d\c\w\p\6\w ]] 00:28:34.758 10:42:28 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:34.758 10:42:28 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:28:34.758 [2024-07-12 10:42:28.626591] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:28:34.758 [2024-07-12 10:42:28.626957] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139174 ] 00:28:35.017 [2024-07-12 10:42:28.794186] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:35.276 [2024-07-12 10:42:28.986086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:36.468  Copying: 512/512 [B] (average 166 kBps) 00:28:36.468 00:28:36.468 10:42:30 -- dd/posix.sh@93 -- # [[ 24yl4x8x94libhxme1m1yaqi79yddghlt2a4qf80ralpirl6wc9ej0hyzuznu1p44qzjstui752y2oz50yhi61wdmgv8xxxpwpgnybyjd63rpztu3bxsq0gbp0hou2cpm0g2fl7di7axj3toqzj67bvidocs59uzn3lpw8b9t3d6dhutt0q62rz4pt0hlv8k2xpw2rzyneer28ti7klogsrpj2xggoy3t6hffmm60cz7m3yq78m9zg12r60frtpa645okjjwakuebxh1xtrnoyeqgyferc8aw5z7eakolqoa58mskald0pm62v2j4qzb96wjjimlmvhp2ltgcz6wqfvkfj1n5u2b0f1986oicgahdblov6xpuirsjj1bbqewkdpz4kso6ce0sbevn7ie8a9cyfk63f9q70yc8n8ars7egcrwauwgccjzqmb5ul7h9y3qegnuy2vi826nvgmbsg38kwnfym8pnh6cbnwb0t63s4rpdbd7uyqtt0dcwp6w == \2\4\y\l\4\x\8\x\9\4\l\i\b\h\x\m\e\1\m\1\y\a\q\i\7\9\y\d\d\g\h\l\t\2\a\4\q\f\8\0\r\a\l\p\i\r\l\6\w\c\9\e\j\0\h\y\z\u\z\n\u\1\p\4\4\q\z\j\s\t\u\i\7\5\2\y\2\o\z\5\0\y\h\i\6\1\w\d\m\g\v\8\x\x\x\p\w\p\g\n\y\b\y\j\d\6\3\r\p\z\t\u\3\b\x\s\q\0\g\b\p\0\h\o\u\2\c\p\m\0\g\2\f\l\7\d\i\7\a\x\j\3\t\o\q\z\j\6\7\b\v\i\d\o\c\s\5\9\u\z\n\3\l\p\w\8\b\9\t\3\d\6\d\h\u\t\t\0\q\6\2\r\z\4\p\t\0\h\l\v\8\k\2\x\p\w\2\r\z\y\n\e\e\r\2\8\t\i\7\k\l\o\g\s\r\p\j\2\x\g\g\o\y\3\t\6\h\f\f\m\m\6\0\c\z\7\m\3\y\q\7\8\m\9\z\g\1\2\r\6\0\f\r\t\p\a\6\4\5\o\k\j\j\w\a\k\u\e\b\x\h\1\x\t\r\n\o\y\e\q\g\y\f\e\r\c\8\a\w\5\z\7\e\a\k\o\l\q\o\a\5\8\m\s\k\a\l\d\0\p\m\6\2\v\2\j\4\q\z\b\9\6\w\j\j\i\m\l\m\v\h\p\2\l\t\g\c\z\6\w\q\f\v\k\f\j\1\n\5\u\2\b\0\f\1\9\8\6\o\i\c\g\a\h\d\b\l\o\v\6\x\p\u\i\r\s\j\j\1\b\b\q\e\w\k\d\p\z\4\k\s\o\6\c\e\0\s\b\e\v\n\7\i\e\8\a\9\c\y\f\k\6\3\f\9\q\7\0\y\c\8\n\8\a\r\s\7\e\g\c\r\w\a\u\w\g\c\c\j\z\q\m\b\5\u\l\7\h\9\y\3\q\e\g\n\u\y\2\v\i\8\2\6\n\v\g\m\b\s\g\3\8\k\w\n\f\y\m\8\p\n\h\6\c\b\n\w\b\0\t\6\3\s\4\r\p\d\b\d\7\u\y\q\t\t\0\d\c\w\p\6\w ]] 00:28:36.468 10:42:30 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:28:36.468 10:42:30 -- dd/posix.sh@86 -- # gen_bytes 512 00:28:36.468 10:42:30 -- dd/common.sh@98 -- # xtrace_disable 00:28:36.468 10:42:30 -- common/autotest_common.sh@10 -- # set +x 00:28:36.468 10:42:30 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:36.468 10:42:30 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:28:36.726 [2024-07-12 10:42:30.387196] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:28:36.726 [2024-07-12 10:42:30.387611] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139199 ] 00:28:36.726 [2024-07-12 10:42:30.540942] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:36.984 [2024-07-12 10:42:30.729051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:38.177  Copying: 512/512 [B] (average 500 kBps) 00:28:38.177 00:28:38.177 10:42:32 -- dd/posix.sh@93 -- # [[ fs8zltzn9es6dcuni7d7ftwm35zbxbtvgs9xaxnn3zvfnypt121ajptkgdh6bt9ndf8f7p9kyphlb1joy50mv0tt0h0syatye405v99mt21akkidhpoq8r0bmcejirgwjdx5j571eszje2nc639azlrpskxu8r3aut0mldxi4md6cedhor96592e9f73b2jtq92rz4d2mrfwahv6i72xg53xcc46iolf3m2zkulmjd1n0q7lliaxv2xhqv838y07odvy5ue26g4zb3ocihp3vuiyo2yvr2g4i7van23g0tjyvqq4cafw3330yg5sbv99eze01eyy5jb4kah0j614auuhoso2nb2qgnal7n99zrsbac41sfvejoxqr6e8ycmvxnyb9ynzrl33r1tg78z9ub12nd6prckxmh8247zm4d52qa0gplyksaogjdl8p1bfuzdtb3t6jewwcqbmo08nepikttyly514bosu7l41a9wzvr2zkpb6ktsdg7d6zwq8 == \f\s\8\z\l\t\z\n\9\e\s\6\d\c\u\n\i\7\d\7\f\t\w\m\3\5\z\b\x\b\t\v\g\s\9\x\a\x\n\n\3\z\v\f\n\y\p\t\1\2\1\a\j\p\t\k\g\d\h\6\b\t\9\n\d\f\8\f\7\p\9\k\y\p\h\l\b\1\j\o\y\5\0\m\v\0\t\t\0\h\0\s\y\a\t\y\e\4\0\5\v\9\9\m\t\2\1\a\k\k\i\d\h\p\o\q\8\r\0\b\m\c\e\j\i\r\g\w\j\d\x\5\j\5\7\1\e\s\z\j\e\2\n\c\6\3\9\a\z\l\r\p\s\k\x\u\8\r\3\a\u\t\0\m\l\d\x\i\4\m\d\6\c\e\d\h\o\r\9\6\5\9\2\e\9\f\7\3\b\2\j\t\q\9\2\r\z\4\d\2\m\r\f\w\a\h\v\6\i\7\2\x\g\5\3\x\c\c\4\6\i\o\l\f\3\m\2\z\k\u\l\m\j\d\1\n\0\q\7\l\l\i\a\x\v\2\x\h\q\v\8\3\8\y\0\7\o\d\v\y\5\u\e\2\6\g\4\z\b\3\o\c\i\h\p\3\v\u\i\y\o\2\y\v\r\2\g\4\i\7\v\a\n\2\3\g\0\t\j\y\v\q\q\4\c\a\f\w\3\3\3\0\y\g\5\s\b\v\9\9\e\z\e\0\1\e\y\y\5\j\b\4\k\a\h\0\j\6\1\4\a\u\u\h\o\s\o\2\n\b\2\q\g\n\a\l\7\n\9\9\z\r\s\b\a\c\4\1\s\f\v\e\j\o\x\q\r\6\e\8\y\c\m\v\x\n\y\b\9\y\n\z\r\l\3\3\r\1\t\g\7\8\z\9\u\b\1\2\n\d\6\p\r\c\k\x\m\h\8\2\4\7\z\m\4\d\5\2\q\a\0\g\p\l\y\k\s\a\o\g\j\d\l\8\p\1\b\f\u\z\d\t\b\3\t\6\j\e\w\w\c\q\b\m\o\0\8\n\e\p\i\k\t\t\y\l\y\5\1\4\b\o\s\u\7\l\4\1\a\9\w\z\v\r\2\z\k\p\b\6\k\t\s\d\g\7\d\6\z\w\q\8 ]] 00:28:38.177 10:42:32 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:38.177 10:42:32 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:28:38.436 [2024-07-12 10:42:32.114335] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:28:38.436 [2024-07-12 10:42:32.114840] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139238 ] 00:28:38.436 [2024-07-12 10:42:32.285039] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:38.695 [2024-07-12 10:42:32.473013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:39.894  Copying: 512/512 [B] (average 500 kBps) 00:28:39.894 00:28:39.894 10:42:33 -- dd/posix.sh@93 -- # [[ fs8zltzn9es6dcuni7d7ftwm35zbxbtvgs9xaxnn3zvfnypt121ajptkgdh6bt9ndf8f7p9kyphlb1joy50mv0tt0h0syatye405v99mt21akkidhpoq8r0bmcejirgwjdx5j571eszje2nc639azlrpskxu8r3aut0mldxi4md6cedhor96592e9f73b2jtq92rz4d2mrfwahv6i72xg53xcc46iolf3m2zkulmjd1n0q7lliaxv2xhqv838y07odvy5ue26g4zb3ocihp3vuiyo2yvr2g4i7van23g0tjyvqq4cafw3330yg5sbv99eze01eyy5jb4kah0j614auuhoso2nb2qgnal7n99zrsbac41sfvejoxqr6e8ycmvxnyb9ynzrl33r1tg78z9ub12nd6prckxmh8247zm4d52qa0gplyksaogjdl8p1bfuzdtb3t6jewwcqbmo08nepikttyly514bosu7l41a9wzvr2zkpb6ktsdg7d6zwq8 == \f\s\8\z\l\t\z\n\9\e\s\6\d\c\u\n\i\7\d\7\f\t\w\m\3\5\z\b\x\b\t\v\g\s\9\x\a\x\n\n\3\z\v\f\n\y\p\t\1\2\1\a\j\p\t\k\g\d\h\6\b\t\9\n\d\f\8\f\7\p\9\k\y\p\h\l\b\1\j\o\y\5\0\m\v\0\t\t\0\h\0\s\y\a\t\y\e\4\0\5\v\9\9\m\t\2\1\a\k\k\i\d\h\p\o\q\8\r\0\b\m\c\e\j\i\r\g\w\j\d\x\5\j\5\7\1\e\s\z\j\e\2\n\c\6\3\9\a\z\l\r\p\s\k\x\u\8\r\3\a\u\t\0\m\l\d\x\i\4\m\d\6\c\e\d\h\o\r\9\6\5\9\2\e\9\f\7\3\b\2\j\t\q\9\2\r\z\4\d\2\m\r\f\w\a\h\v\6\i\7\2\x\g\5\3\x\c\c\4\6\i\o\l\f\3\m\2\z\k\u\l\m\j\d\1\n\0\q\7\l\l\i\a\x\v\2\x\h\q\v\8\3\8\y\0\7\o\d\v\y\5\u\e\2\6\g\4\z\b\3\o\c\i\h\p\3\v\u\i\y\o\2\y\v\r\2\g\4\i\7\v\a\n\2\3\g\0\t\j\y\v\q\q\4\c\a\f\w\3\3\3\0\y\g\5\s\b\v\9\9\e\z\e\0\1\e\y\y\5\j\b\4\k\a\h\0\j\6\1\4\a\u\u\h\o\s\o\2\n\b\2\q\g\n\a\l\7\n\9\9\z\r\s\b\a\c\4\1\s\f\v\e\j\o\x\q\r\6\e\8\y\c\m\v\x\n\y\b\9\y\n\z\r\l\3\3\r\1\t\g\7\8\z\9\u\b\1\2\n\d\6\p\r\c\k\x\m\h\8\2\4\7\z\m\4\d\5\2\q\a\0\g\p\l\y\k\s\a\o\g\j\d\l\8\p\1\b\f\u\z\d\t\b\3\t\6\j\e\w\w\c\q\b\m\o\0\8\n\e\p\i\k\t\t\y\l\y\5\1\4\b\o\s\u\7\l\4\1\a\9\w\z\v\r\2\z\k\p\b\6\k\t\s\d\g\7\d\6\z\w\q\8 ]] 00:28:39.894 10:42:33 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:39.894 10:42:33 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:28:40.152 [2024-07-12 10:42:33.848688] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:28:40.153 [2024-07-12 10:42:33.849150] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139267 ] 00:28:40.153 [2024-07-12 10:42:34.014935] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:40.411 [2024-07-12 10:42:34.205475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:41.604  Copying: 512/512 [B] (average 83 kBps) 00:28:41.604 00:28:41.863 10:42:35 -- dd/posix.sh@93 -- # [[ fs8zltzn9es6dcuni7d7ftwm35zbxbtvgs9xaxnn3zvfnypt121ajptkgdh6bt9ndf8f7p9kyphlb1joy50mv0tt0h0syatye405v99mt21akkidhpoq8r0bmcejirgwjdx5j571eszje2nc639azlrpskxu8r3aut0mldxi4md6cedhor96592e9f73b2jtq92rz4d2mrfwahv6i72xg53xcc46iolf3m2zkulmjd1n0q7lliaxv2xhqv838y07odvy5ue26g4zb3ocihp3vuiyo2yvr2g4i7van23g0tjyvqq4cafw3330yg5sbv99eze01eyy5jb4kah0j614auuhoso2nb2qgnal7n99zrsbac41sfvejoxqr6e8ycmvxnyb9ynzrl33r1tg78z9ub12nd6prckxmh8247zm4d52qa0gplyksaogjdl8p1bfuzdtb3t6jewwcqbmo08nepikttyly514bosu7l41a9wzvr2zkpb6ktsdg7d6zwq8 == \f\s\8\z\l\t\z\n\9\e\s\6\d\c\u\n\i\7\d\7\f\t\w\m\3\5\z\b\x\b\t\v\g\s\9\x\a\x\n\n\3\z\v\f\n\y\p\t\1\2\1\a\j\p\t\k\g\d\h\6\b\t\9\n\d\f\8\f\7\p\9\k\y\p\h\l\b\1\j\o\y\5\0\m\v\0\t\t\0\h\0\s\y\a\t\y\e\4\0\5\v\9\9\m\t\2\1\a\k\k\i\d\h\p\o\q\8\r\0\b\m\c\e\j\i\r\g\w\j\d\x\5\j\5\7\1\e\s\z\j\e\2\n\c\6\3\9\a\z\l\r\p\s\k\x\u\8\r\3\a\u\t\0\m\l\d\x\i\4\m\d\6\c\e\d\h\o\r\9\6\5\9\2\e\9\f\7\3\b\2\j\t\q\9\2\r\z\4\d\2\m\r\f\w\a\h\v\6\i\7\2\x\g\5\3\x\c\c\4\6\i\o\l\f\3\m\2\z\k\u\l\m\j\d\1\n\0\q\7\l\l\i\a\x\v\2\x\h\q\v\8\3\8\y\0\7\o\d\v\y\5\u\e\2\6\g\4\z\b\3\o\c\i\h\p\3\v\u\i\y\o\2\y\v\r\2\g\4\i\7\v\a\n\2\3\g\0\t\j\y\v\q\q\4\c\a\f\w\3\3\3\0\y\g\5\s\b\v\9\9\e\z\e\0\1\e\y\y\5\j\b\4\k\a\h\0\j\6\1\4\a\u\u\h\o\s\o\2\n\b\2\q\g\n\a\l\7\n\9\9\z\r\s\b\a\c\4\1\s\f\v\e\j\o\x\q\r\6\e\8\y\c\m\v\x\n\y\b\9\y\n\z\r\l\3\3\r\1\t\g\7\8\z\9\u\b\1\2\n\d\6\p\r\c\k\x\m\h\8\2\4\7\z\m\4\d\5\2\q\a\0\g\p\l\y\k\s\a\o\g\j\d\l\8\p\1\b\f\u\z\d\t\b\3\t\6\j\e\w\w\c\q\b\m\o\0\8\n\e\p\i\k\t\t\y\l\y\5\1\4\b\o\s\u\7\l\4\1\a\9\w\z\v\r\2\z\k\p\b\6\k\t\s\d\g\7\d\6\z\w\q\8 ]] 00:28:41.863 10:42:35 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:41.863 10:42:35 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:28:41.863 [2024-07-12 10:42:35.595614] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
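The four 512 B copies in this stretch are the same transfer re-run once per output flag (direct, nonblock, sync, dsync); the spread in the reported averages, from 83 kBps under sync up to 500 kBps under nonblock, plausibly reflects cache-bypass and synchronization cost rather than a regression. A minimal sketch of the loop the dd/posix.sh trace above appears to run, with the gen_bytes and xtrace plumbing omitted:

    # Sketch, assuming a simplified dd/posix.sh: one 512 B copy per output flag.
    flags_rw=(direct nonblock sync dsync)
    for flag_rw in "${flags_rw[@]}"; do
        /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio \
            --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock \
            --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag="$flag_rw"
    done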
00:28:41.863 [2024-07-12 10:42:35.596109] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139284 ] 00:28:41.863 [2024-07-12 10:42:35.762736] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:42.121 [2024-07-12 10:42:35.954980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:43.756  Copying: 512/512 [B] (average 250 kBps) 00:28:43.756 00:28:43.756 ************************************ 00:28:43.756 END TEST dd_flags_misc_forced_aio 00:28:43.756 ************************************ 00:28:43.756 10:42:37 -- dd/posix.sh@93 -- # [[ fs8zltzn9es6dcuni7d7ftwm35zbxbtvgs9xaxnn3zvfnypt121ajptkgdh6bt9ndf8f7p9kyphlb1joy50mv0tt0h0syatye405v99mt21akkidhpoq8r0bmcejirgwjdx5j571eszje2nc639azlrpskxu8r3aut0mldxi4md6cedhor96592e9f73b2jtq92rz4d2mrfwahv6i72xg53xcc46iolf3m2zkulmjd1n0q7lliaxv2xhqv838y07odvy5ue26g4zb3ocihp3vuiyo2yvr2g4i7van23g0tjyvqq4cafw3330yg5sbv99eze01eyy5jb4kah0j614auuhoso2nb2qgnal7n99zrsbac41sfvejoxqr6e8ycmvxnyb9ynzrl33r1tg78z9ub12nd6prckxmh8247zm4d52qa0gplyksaogjdl8p1bfuzdtb3t6jewwcqbmo08nepikttyly514bosu7l41a9wzvr2zkpb6ktsdg7d6zwq8 == \f\s\8\z\l\t\z\n\9\e\s\6\d\c\u\n\i\7\d\7\f\t\w\m\3\5\z\b\x\b\t\v\g\s\9\x\a\x\n\n\3\z\v\f\n\y\p\t\1\2\1\a\j\p\t\k\g\d\h\6\b\t\9\n\d\f\8\f\7\p\9\k\y\p\h\l\b\1\j\o\y\5\0\m\v\0\t\t\0\h\0\s\y\a\t\y\e\4\0\5\v\9\9\m\t\2\1\a\k\k\i\d\h\p\o\q\8\r\0\b\m\c\e\j\i\r\g\w\j\d\x\5\j\5\7\1\e\s\z\j\e\2\n\c\6\3\9\a\z\l\r\p\s\k\x\u\8\r\3\a\u\t\0\m\l\d\x\i\4\m\d\6\c\e\d\h\o\r\9\6\5\9\2\e\9\f\7\3\b\2\j\t\q\9\2\r\z\4\d\2\m\r\f\w\a\h\v\6\i\7\2\x\g\5\3\x\c\c\4\6\i\o\l\f\3\m\2\z\k\u\l\m\j\d\1\n\0\q\7\l\l\i\a\x\v\2\x\h\q\v\8\3\8\y\0\7\o\d\v\y\5\u\e\2\6\g\4\z\b\3\o\c\i\h\p\3\v\u\i\y\o\2\y\v\r\2\g\4\i\7\v\a\n\2\3\g\0\t\j\y\v\q\q\4\c\a\f\w\3\3\3\0\y\g\5\s\b\v\9\9\e\z\e\0\1\e\y\y\5\j\b\4\k\a\h\0\j\6\1\4\a\u\u\h\o\s\o\2\n\b\2\q\g\n\a\l\7\n\9\9\z\r\s\b\a\c\4\1\s\f\v\e\j\o\x\q\r\6\e\8\y\c\m\v\x\n\y\b\9\y\n\z\r\l\3\3\r\1\t\g\7\8\z\9\u\b\1\2\n\d\6\p\r\c\k\x\m\h\8\2\4\7\z\m\4\d\5\2\q\a\0\g\p\l\y\k\s\a\o\g\j\d\l\8\p\1\b\f\u\z\d\t\b\3\t\6\j\e\w\w\c\q\b\m\o\0\8\n\e\p\i\k\t\t\y\l\y\5\1\4\b\o\s\u\7\l\4\1\a\9\w\z\v\r\2\z\k\p\b\6\k\t\s\d\g\7\d\6\z\w\q\8 ]] 00:28:43.756 00:28:43.756 real 0m13.984s 00:28:43.756 user 0m10.758s 00:28:43.756 sys 0m2.135s 00:28:43.756 10:42:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:43.756 10:42:37 -- common/autotest_common.sh@10 -- # set +x 00:28:43.756 10:42:37 -- dd/posix.sh@1 -- # cleanup 00:28:43.756 10:42:37 -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:28:43.756 10:42:37 -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:28:43.756 00:28:43.756 real 0m55.571s 00:28:43.756 user 0m41.417s 00:28:43.756 sys 0m7.981s 00:28:43.756 ************************************ 00:28:43.756 END TEST spdk_dd_posix 00:28:43.756 ************************************ 00:28:43.757 10:42:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:43.757 10:42:37 -- common/autotest_common.sh@10 -- # set +x 00:28:43.757 10:42:37 -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:28:43.757 10:42:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:43.757 10:42:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:43.757 10:42:37 -- 
common/autotest_common.sh@10 -- # set +x 00:28:43.757 ************************************ 00:28:43.757 START TEST spdk_dd_malloc 00:28:43.757 ************************************ 00:28:43.757 10:42:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:28:43.757 * Looking for test storage... 00:28:43.757 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:28:43.757 10:42:37 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:43.757 10:42:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:43.757 10:42:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:43.757 10:42:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:43.757 10:42:37 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:43.757 10:42:37 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:43.757 10:42:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:43.757 10:42:37 -- paths/export.sh@5 -- # export PATH 00:28:43.757 10:42:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:43.757 10:42:37 -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:28:43.757 10:42:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:43.757 10:42:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:43.757 10:42:37 -- common/autotest_common.sh@10 -- # set +x 00:28:43.757 ************************************ 00:28:43.757 START TEST dd_malloc_copy 00:28:43.757 ************************************ 00:28:43.757 10:42:37 -- 
common/autotest_common.sh@1104 -- # malloc_copy 00:28:43.757 10:42:37 -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:28:43.757 10:42:37 -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:28:43.757 10:42:37 -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(["name"]=$mbdev0 ["num_blocks"]=$mbdev0_b ["block_size"]=$mbdev0_bs) 00:28:43.757 10:42:37 -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:28:43.757 10:42:37 -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(["name"]=$mbdev1 ["num_blocks"]=$mbdev1_b ["block_size"]=$mbdev1_bs) 00:28:43.757 10:42:37 -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:28:43.757 10:42:37 -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:28:43.757 10:42:37 -- dd/malloc.sh@28 -- # gen_conf 00:28:43.757 10:42:37 -- dd/common.sh@31 -- # xtrace_disable 00:28:43.757 10:42:37 -- common/autotest_common.sh@10 -- # set +x 00:28:43.757 [2024-07-12 10:42:37.536569] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:28:43.757 [2024-07-12 10:42:37.536866] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139380 ] 00:28:43.757 { 00:28:43.757 "subsystems": [ 00:28:43.757 { 00:28:43.757 "subsystem": "bdev", 00:28:43.757 "config": [ 00:28:43.757 { 00:28:43.757 "params": { 00:28:43.757 "num_blocks": 1048576, 00:28:43.757 "block_size": 512, 00:28:43.757 "name": "malloc0" 00:28:43.757 }, 00:28:43.757 "method": "bdev_malloc_create" 00:28:43.757 }, 00:28:43.757 { 00:28:43.757 "params": { 00:28:43.757 "num_blocks": 1048576, 00:28:43.757 "block_size": 512, 00:28:43.757 "name": "malloc1" 00:28:43.757 }, 00:28:43.757 "method": "bdev_malloc_create" 00:28:43.757 }, 00:28:43.757 { 00:28:43.757 "method": "bdev_wait_for_examine" 00:28:43.757 } 00:28:43.757 ] 00:28:43.757 } 00:28:43.757 ] 00:28:43.757 } 00:28:44.016 [2024-07-12 10:42:37.689507] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:44.016 [2024-07-12 10:42:37.872709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:51.028  Copying: 223/512 [MB] (223 MBps) Copying: 446/512 [MB] (223 MBps) Copying: 512/512 [MB] (average 223 MBps) 00:28:51.028 00:28:51.028 10:42:44 -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:28:51.028 10:42:44 -- dd/malloc.sh@33 -- # gen_conf 00:28:51.028 10:42:44 -- dd/common.sh@31 -- # xtrace_disable 00:28:51.028 10:42:44 -- common/autotest_common.sh@10 -- # set +x 00:28:51.028 [2024-07-12 10:42:44.727935] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
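Each malloc bdev declared in the JSON above is 1048576 blocks of 512 bytes, i.e. 1048576 * 512 = 536870912 bytes, which is exactly the 512/512 [MB] the first copy reports. A minimal sketch of the invocation pattern, assuming the config reaches --json through process substitution much as gen_conf does via /dev/fd/62:

    # Sketch, condensed from dd/malloc.sh; conf is the JSON printed above.
    conf='{"subsystems":[{"subsystem":"bdev","config":[
      {"params":{"num_blocks":1048576,"block_size":512,"name":"malloc0"},"method":"bdev_malloc_create"},
      {"params":{"num_blocks":1048576,"block_size":512,"name":"malloc1"},"method":"bdev_malloc_create"},
      {"method":"bdev_wait_for_examine"}]}]}'
    # Copy all of malloc0 into malloc1; the reverse run below swaps --ib and --ob.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json <(echo "$conf")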
00:28:51.028 [2024-07-12 10:42:44.728407] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139487 ] 00:28:51.028 { 00:28:51.028 "subsystems": [ 00:28:51.028 { 00:28:51.028 "subsystem": "bdev", 00:28:51.028 "config": [ 00:28:51.028 { 00:28:51.028 "params": { 00:28:51.028 "num_blocks": 1048576, 00:28:51.028 "block_size": 512, 00:28:51.028 "name": "malloc0" 00:28:51.028 }, 00:28:51.028 "method": "bdev_malloc_create" 00:28:51.028 }, 00:28:51.028 { 00:28:51.028 "params": { 00:28:51.028 "num_blocks": 1048576, 00:28:51.028 "block_size": 512, 00:28:51.028 "name": "malloc1" 00:28:51.028 }, 00:28:51.028 "method": "bdev_malloc_create" 00:28:51.028 }, 00:28:51.028 { 00:28:51.028 "method": "bdev_wait_for_examine" 00:28:51.028 } 00:28:51.028 ] 00:28:51.028 } 00:28:51.028 ] 00:28:51.028 } 00:28:51.028 [2024-07-12 10:42:44.896335] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:51.286 [2024-07-12 10:42:45.077320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:58.047  Copying: 223/512 [MB] (223 MBps) Copying: 447/512 [MB] (224 MBps) Copying: 512/512 [MB] (average 223 MBps) 00:28:58.047 00:28:58.047 ************************************ 00:28:58.047 END TEST dd_malloc_copy 00:28:58.047 ************************************ 00:28:58.047 00:28:58.047 real 0m14.317s 00:28:58.047 user 0m12.826s 00:28:58.047 sys 0m1.379s 00:28:58.047 10:42:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:58.047 10:42:51 -- common/autotest_common.sh@10 -- # set +x 00:28:58.047 ************************************ 00:28:58.047 END TEST spdk_dd_malloc 00:28:58.047 ************************************ 00:28:58.047 00:28:58.047 real 0m14.449s 00:28:58.047 user 0m12.890s 00:28:58.047 sys 0m1.447s 00:28:58.047 10:42:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:58.047 10:42:51 -- common/autotest_common.sh@10 -- # set +x 00:28:58.047 10:42:51 -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 00:28:58.047 10:42:51 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:58.047 10:42:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:58.047 10:42:51 -- common/autotest_common.sh@10 -- # set +x 00:28:58.047 ************************************ 00:28:58.047 START TEST spdk_dd_bdev_to_bdev 00:28:58.047 ************************************ 00:28:58.047 10:42:51 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 00:28:58.047 * Looking for test storage... 
00:28:58.047 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:28:58.306 10:42:51 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:58.306 10:42:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:58.306 10:42:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:58.306 10:42:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:58.306 10:42:51 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:58.306 10:42:51 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:58.306 10:42:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:58.306 10:42:51 -- paths/export.sh@5 -- # export PATH 00:28:58.307 10:42:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:58.307 10:42:51 -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:28:58.307 10:42:51 -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:28:58.307 10:42:51 -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:28:58.307 10:42:51 -- dd/bdev_to_bdev.sh@51 -- # (( 1 > 1 )) 00:28:58.307 10:42:51 -- dd/bdev_to_bdev.sh@67 -- # nvme0=Nvme0 00:28:58.307 10:42:51 -- dd/bdev_to_bdev.sh@67 -- # bdev0=Nvme0n1 00:28:58.307 10:42:51 -- dd/bdev_to_bdev.sh@67 -- # nvme0_pci=0000:00:06.0 00:28:58.307 10:42:51 -- dd/bdev_to_bdev.sh@68 -- # aio1=/home/vagrant/spdk_repo/spdk/test/dd/aio1 00:28:58.307 10:42:51 -- dd/bdev_to_bdev.sh@68 -- # bdev1=aio1 00:28:58.307 10:42:51 -- dd/bdev_to_bdev.sh@70 -- # method_bdev_nvme_attach_controller_1=(["name"]=$nvme0 ["traddr"]=$nvme0_pci ["trtype"]=pcie) 00:28:58.307 10:42:51 -- 
dd/bdev_to_bdev.sh@70 -- # declare -A method_bdev_nvme_attach_controller_1 00:28:58.307 10:42:51 -- dd/bdev_to_bdev.sh@75 -- # method_bdev_aio_create_0=(["name"]=$bdev1 ["filename"]=$aio1 ["block_size"]=4096) 00:28:58.307 10:42:51 -- dd/bdev_to_bdev.sh@75 -- # declare -A method_bdev_aio_create_0 00:28:58.307 10:42:51 -- dd/bdev_to_bdev.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/aio1 --bs=1048576 --count=256 00:28:58.307 [2024-07-12 10:42:52.028470] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:28:58.307 [2024-07-12 10:42:52.028931] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139652 ] 00:28:58.307 [2024-07-12 10:42:52.196936] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:58.565 [2024-07-12 10:42:52.365601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:00.064  Copying: 256/256 [MB] (average 1479 MBps) 00:29:00.064 00:29:00.064 10:42:53 -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:29:00.064 10:42:53 -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:29:00.064 10:42:53 -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:29:00.064 10:42:53 -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:29:00.064 10:42:53 -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:29:00.064 10:42:53 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:29:00.064 10:42:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:00.064 10:42:53 -- common/autotest_common.sh@10 -- # set +x 00:29:00.064 ************************************ 00:29:00.064 START TEST dd_inflate_file 00:29:00.064 ************************************ 00:29:00.064 10:42:53 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:29:00.064 [2024-07-12 10:42:53.821589] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:29:00.065 [2024-07-12 10:42:53.821983] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139688 ] 00:29:00.322 [2024-07-12 10:42:53.988528] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:00.322 [2024-07-12 10:42:54.156221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:01.548  Copying: 64/64 [MB] (average 1488 MBps) 00:29:01.548 00:29:01.548 00:29:01.548 real 0m1.676s 00:29:01.548 user 0m1.246s 00:29:01.548 sys 0m0.260s 00:29:01.548 10:42:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:01.548 10:42:55 -- common/autotest_common.sh@10 -- # set +x 00:29:01.548 ************************************ 00:29:01.548 END TEST dd_inflate_file 00:29:01.548 ************************************ 00:29:01.815 10:42:55 -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:29:01.815 10:42:55 -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:29:01.815 10:42:55 -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:29:01.816 10:42:55 -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:29:01.816 10:42:55 -- dd/common.sh@31 -- # xtrace_disable 00:29:01.816 10:42:55 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:29:01.816 10:42:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:01.816 10:42:55 -- common/autotest_common.sh@10 -- # set +x 00:29:01.816 10:42:55 -- common/autotest_common.sh@10 -- # set +x 00:29:01.816 ************************************ 00:29:01.816 START TEST dd_copy_to_out_bdev 00:29:01.816 ************************************ 00:29:01.816 10:42:55 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:29:01.816 { 00:29:01.816 "subsystems": [ 00:29:01.816 { 00:29:01.816 "subsystem": "bdev", 00:29:01.816 "config": [ 00:29:01.816 { 00:29:01.816 "params": { 00:29:01.816 "block_size": 4096, 00:29:01.816 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:29:01.816 "name": "aio1" 00:29:01.816 }, 00:29:01.816 "method": "bdev_aio_create" 00:29:01.816 }, 00:29:01.816 { 00:29:01.816 "params": { 00:29:01.816 "trtype": "pcie", 00:29:01.816 "traddr": "0000:00:06.0", 00:29:01.816 "name": "Nvme0" 00:29:01.816 }, 00:29:01.816 "method": "bdev_nvme_attach_controller" 00:29:01.816 }, 00:29:01.816 { 00:29:01.816 "method": "bdev_wait_for_examine" 00:29:01.816 } 00:29:01.816 ] 00:29:01.816 } 00:29:01.816 ] 00:29:01.816 } 00:29:01.816 [2024-07-12 10:42:55.559144] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
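The test_file0_size=67108891 recorded above is self-consistent: the 26-character magic line plus its newline contributes 27 bytes, and the inflate step appends 64 blocks of 1048576 bytes after it. A quick check in plain shell arithmetic:

    # 64 MiB appended after the 27-byte "This Is Our Magic, find it" line.
    echo $(( 64 * 1048576 + 27 ))   # 67108891, the size wc -c reported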
00:29:01.816 [2024-07-12 10:42:55.559548] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139739 ] 00:29:02.085 [2024-07-12 10:42:55.732819] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:02.085 [2024-07-12 10:42:55.965381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:04.960  Copying: 44/64 [MB] (44 MBps) Copying: 64/64 [MB] (average 44 MBps) 00:29:04.960 00:29:04.960 ************************************ 00:29:04.960 END TEST dd_copy_to_out_bdev 00:29:04.960 ************************************ 00:29:04.960 00:29:04.960 real 0m3.254s 00:29:04.960 user 0m2.882s 00:29:04.960 sys 0m0.285s 00:29:04.960 10:42:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:04.960 10:42:58 -- common/autotest_common.sh@10 -- # set +x 00:29:04.960 10:42:58 -- dd/bdev_to_bdev.sh@113 -- # count=65 00:29:04.960 10:42:58 -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:29:04.960 10:42:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:04.960 10:42:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:04.960 10:42:58 -- common/autotest_common.sh@10 -- # set +x 00:29:04.960 ************************************ 00:29:04.960 START TEST dd_offset_magic 00:29:04.960 ************************************ 00:29:04.960 10:42:58 -- common/autotest_common.sh@1104 -- # offset_magic 00:29:04.960 10:42:58 -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:29:04.960 10:42:58 -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:29:04.960 10:42:58 -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:29:04.960 10:42:58 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:29:04.960 10:42:58 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:29:04.960 10:42:58 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:29:04.960 10:42:58 -- dd/common.sh@31 -- # xtrace_disable 00:29:04.960 10:42:58 -- common/autotest_common.sh@10 -- # set +x 00:29:04.960 [2024-07-12 10:42:58.865820] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:29:04.960 [2024-07-12 10:42:58.866184] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139806 ] 00:29:05.218 { 00:29:05.218 "subsystems": [ 00:29:05.218 { 00:29:05.218 "subsystem": "bdev", 00:29:05.218 "config": [ 00:29:05.218 { 00:29:05.218 "params": { 00:29:05.218 "block_size": 4096, 00:29:05.218 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:29:05.218 "name": "aio1" 00:29:05.218 }, 00:29:05.218 "method": "bdev_aio_create" 00:29:05.218 }, 00:29:05.218 { 00:29:05.218 "params": { 00:29:05.218 "trtype": "pcie", 00:29:05.218 "traddr": "0000:00:06.0", 00:29:05.218 "name": "Nvme0" 00:29:05.218 }, 00:29:05.218 "method": "bdev_nvme_attach_controller" 00:29:05.218 }, 00:29:05.218 { 00:29:05.218 "method": "bdev_wait_for_examine" 00:29:05.218 } 00:29:05.218 ] 00:29:05.218 } 00:29:05.218 ] 00:29:05.218 } 00:29:05.218 [2024-07-12 10:42:59.033036] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:05.477 [2024-07-12 10:42:59.194725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:06.671  Copying: 65/65 [MB] (average 1181 MBps) 00:29:06.671 00:29:06.671 10:43:00 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:29:06.671 10:43:00 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:29:06.671 10:43:00 -- dd/common.sh@31 -- # xtrace_disable 00:29:06.671 10:43:00 -- common/autotest_common.sh@10 -- # set +x 00:29:06.671 [2024-07-12 10:43:00.551054] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:29:06.671 [2024-07-12 10:43:00.551526] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139838 ] 00:29:06.671 { 00:29:06.671 "subsystems": [ 00:29:06.671 { 00:29:06.671 "subsystem": "bdev", 00:29:06.671 "config": [ 00:29:06.671 { 00:29:06.671 "params": { 00:29:06.671 "block_size": 4096, 00:29:06.671 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:29:06.671 "name": "aio1" 00:29:06.671 }, 00:29:06.671 "method": "bdev_aio_create" 00:29:06.671 }, 00:29:06.671 { 00:29:06.671 "params": { 00:29:06.671 "trtype": "pcie", 00:29:06.671 "traddr": "0000:00:06.0", 00:29:06.671 "name": "Nvme0" 00:29:06.671 }, 00:29:06.671 "method": "bdev_nvme_attach_controller" 00:29:06.671 }, 00:29:06.671 { 00:29:06.671 "method": "bdev_wait_for_examine" 00:29:06.671 } 00:29:06.671 ] 00:29:06.671 } 00:29:06.671 ] 00:29:06.671 } 00:29:06.929 [2024-07-12 10:43:00.719612] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:07.188 [2024-07-12 10:43:00.886221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:08.820  Copying: 1024/1024 [kB] (average 27 MBps) 00:29:08.820 00:29:08.820 10:43:02 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:29:08.820 10:43:02 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:29:08.820 10:43:02 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:29:08.820 10:43:02 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:29:08.820 10:43:02 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:29:08.820 10:43:02 -- dd/common.sh@31 -- # xtrace_disable 00:29:08.820 10:43:02 -- common/autotest_common.sh@10 -- # set +x 00:29:08.820 [2024-07-12 10:43:02.381299] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:29:08.820 [2024-07-12 10:43:02.381659] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139885 ] 00:29:08.820 { 00:29:08.820 "subsystems": [ 00:29:08.820 { 00:29:08.820 "subsystem": "bdev", 00:29:08.820 "config": [ 00:29:08.820 { 00:29:08.820 "params": { 00:29:08.820 "block_size": 4096, 00:29:08.820 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:29:08.820 "name": "aio1" 00:29:08.820 }, 00:29:08.820 "method": "bdev_aio_create" 00:29:08.820 }, 00:29:08.820 { 00:29:08.820 "params": { 00:29:08.820 "trtype": "pcie", 00:29:08.820 "traddr": "0000:00:06.0", 00:29:08.820 "name": "Nvme0" 00:29:08.820 }, 00:29:08.820 "method": "bdev_nvme_attach_controller" 00:29:08.820 }, 00:29:08.820 { 00:29:08.820 "method": "bdev_wait_for_examine" 00:29:08.820 } 00:29:08.820 ] 00:29:08.820 } 00:29:08.820 ] 00:29:08.820 } 00:29:08.820 [2024-07-12 10:43:02.548842] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:08.820 [2024-07-12 10:43:02.708629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:10.321  Copying: 65/65 [MB] (average 1226 MBps) 00:29:10.321 00:29:10.321 10:43:03 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:29:10.321 10:43:03 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:29:10.321 10:43:03 -- dd/common.sh@31 -- # xtrace_disable 00:29:10.321 10:43:03 -- common/autotest_common.sh@10 -- # set +x 00:29:10.321 [2024-07-12 10:43:04.062709] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:29:10.321 [2024-07-12 10:43:04.063779] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139912 ] 00:29:10.321 { 00:29:10.321 "subsystems": [ 00:29:10.321 { 00:29:10.321 "subsystem": "bdev", 00:29:10.321 "config": [ 00:29:10.321 { 00:29:10.321 "params": { 00:29:10.321 "block_size": 4096, 00:29:10.321 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:29:10.321 "name": "aio1" 00:29:10.321 }, 00:29:10.321 "method": "bdev_aio_create" 00:29:10.321 }, 00:29:10.321 { 00:29:10.321 "params": { 00:29:10.321 "trtype": "pcie", 00:29:10.321 "traddr": "0000:00:06.0", 00:29:10.321 "name": "Nvme0" 00:29:10.321 }, 00:29:10.321 "method": "bdev_nvme_attach_controller" 00:29:10.321 }, 00:29:10.321 { 00:29:10.321 "method": "bdev_wait_for_examine" 00:29:10.321 } 00:29:10.321 ] 00:29:10.321 } 00:29:10.321 ] 00:29:10.321 } 00:29:10.321 [2024-07-12 10:43:04.232684] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:10.579 [2024-07-12 10:43:04.408639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:12.079  Copying: 1024/1024 [kB] (average 1000 MBps) 00:29:12.079 00:29:12.079 ************************************ 00:29:12.079 END TEST dd_offset_magic 00:29:12.079 ************************************ 00:29:12.079 10:43:05 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:29:12.079 10:43:05 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:29:12.079 00:29:12.079 real 0m6.956s 00:29:12.079 user 0m5.527s 00:29:12.079 sys 0m0.954s 00:29:12.079 10:43:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:12.079 10:43:05 -- common/autotest_common.sh@10 -- # set +x 00:29:12.079 10:43:05 -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:29:12.079 10:43:05 -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:29:12.079 10:43:05 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:29:12.079 10:43:05 -- dd/common.sh@11 -- # local nvme_ref= 00:29:12.079 10:43:05 -- dd/common.sh@12 -- # local size=4194330 00:29:12.079 10:43:05 -- dd/common.sh@14 -- # local bs=1048576 00:29:12.079 10:43:05 -- dd/common.sh@15 -- # local count=5 00:29:12.079 10:43:05 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:29:12.079 10:43:05 -- dd/common.sh@18 -- # gen_conf 00:29:12.079 10:43:05 -- dd/common.sh@31 -- # xtrace_disable 00:29:12.079 10:43:05 -- common/autotest_common.sh@10 -- # set +x 00:29:12.079 [2024-07-12 10:43:05.860304] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
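The clear_nvme call above zeroes a region of size=4194330 bytes, which is 4 * 1048576 + 26: the 4 MiB offset area plus the 26-byte magic written during the offset tests. At bs=1048576 that takes five blocks, hence --count=5. A quick check in plain shell arithmetic:

    echo $(( 4 * 1048576 + 26 ))                   # 4194330, the clear_nvme size
    echo $(( (4194330 + 1048576 - 1) / 1048576 ))  # 5, the block count needed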
00:29:12.079 [2024-07-12 10:43:05.860709] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139956 ] 00:29:12.079 { 00:29:12.079 "subsystems": [ 00:29:12.079 { 00:29:12.079 "subsystem": "bdev", 00:29:12.079 "config": [ 00:29:12.079 { 00:29:12.079 "params": { 00:29:12.079 "block_size": 4096, 00:29:12.079 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:29:12.079 "name": "aio1" 00:29:12.079 }, 00:29:12.079 "method": "bdev_aio_create" 00:29:12.079 }, 00:29:12.079 { 00:29:12.079 "params": { 00:29:12.079 "trtype": "pcie", 00:29:12.079 "traddr": "0000:00:06.0", 00:29:12.079 "name": "Nvme0" 00:29:12.079 }, 00:29:12.079 "method": "bdev_nvme_attach_controller" 00:29:12.079 }, 00:29:12.079 { 00:29:12.079 "method": "bdev_wait_for_examine" 00:29:12.079 } 00:29:12.079 ] 00:29:12.079 } 00:29:12.079 ] 00:29:12.079 } 00:29:12.338 [2024-07-12 10:43:06.028528] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:12.338 [2024-07-12 10:43:06.203560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:13.840  Copying: 5120/5120 [kB] (average 1250 MBps) 00:29:13.840 00:29:13.840 10:43:07 -- dd/bdev_to_bdev.sh@43 -- # clear_nvme aio1 '' 4194330 00:29:13.840 10:43:07 -- dd/common.sh@10 -- # local bdev=aio1 00:29:13.840 10:43:07 -- dd/common.sh@11 -- # local nvme_ref= 00:29:13.840 10:43:07 -- dd/common.sh@12 -- # local size=4194330 00:29:13.840 10:43:07 -- dd/common.sh@14 -- # local bs=1048576 00:29:13.840 10:43:07 -- dd/common.sh@15 -- # local count=5 00:29:13.840 10:43:07 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=aio1 --count=5 --json /dev/fd/62 00:29:13.840 10:43:07 -- dd/common.sh@18 -- # gen_conf 00:29:13.840 10:43:07 -- dd/common.sh@31 -- # xtrace_disable 00:29:13.840 10:43:07 -- common/autotest_common.sh@10 -- # set +x 00:29:13.840 [2024-07-12 10:43:07.510210] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:29:13.840 [2024-07-12 10:43:07.510568] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139989 ] 00:29:13.840 { 00:29:13.840 "subsystems": [ 00:29:13.840 { 00:29:13.840 "subsystem": "bdev", 00:29:13.840 "config": [ 00:29:13.840 { 00:29:13.840 "params": { 00:29:13.840 "block_size": 4096, 00:29:13.840 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:29:13.840 "name": "aio1" 00:29:13.840 }, 00:29:13.840 "method": "bdev_aio_create" 00:29:13.840 }, 00:29:13.840 { 00:29:13.840 "params": { 00:29:13.840 "trtype": "pcie", 00:29:13.840 "traddr": "0000:00:06.0", 00:29:13.840 "name": "Nvme0" 00:29:13.840 }, 00:29:13.840 "method": "bdev_nvme_attach_controller" 00:29:13.840 }, 00:29:13.840 { 00:29:13.840 "method": "bdev_wait_for_examine" 00:29:13.840 } 00:29:13.840 ] 00:29:13.840 } 00:29:13.840 ] 00:29:13.840 } 00:29:13.840 [2024-07-12 10:43:07.676713] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:14.099 [2024-07-12 10:43:07.838569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:15.291  Copying: 5120/5120 [kB] (average 1250 MBps) 00:29:15.291 00:29:15.291 10:43:09 -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/aio1 00:29:15.550 ************************************ 00:29:15.550 END TEST spdk_dd_bdev_to_bdev 00:29:15.550 ************************************ 00:29:15.550 00:29:15.550 real 0m17.341s 00:29:15.550 user 0m13.732s 00:29:15.550 sys 0m2.499s 00:29:15.550 10:43:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:15.550 10:43:09 -- common/autotest_common.sh@10 -- # set +x 00:29:15.550 10:43:09 -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:29:15.550 10:43:09 -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:29:15.550 10:43:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:15.550 10:43:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:15.551 10:43:09 -- common/autotest_common.sh@10 -- # set +x 00:29:15.551 ************************************ 00:29:15.551 START TEST spdk_dd_sparse 00:29:15.551 ************************************ 00:29:15.551 10:43:09 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:29:15.551 * Looking for test storage... 
00:29:15.551 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:29:15.551 10:43:09 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:15.551 10:43:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:15.551 10:43:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:15.551 10:43:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:15.551 10:43:09 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:15.551 10:43:09 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:15.551 10:43:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:15.551 10:43:09 -- paths/export.sh@5 -- # export PATH 00:29:15.551 10:43:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:15.551 10:43:09 -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:29:15.551 10:43:09 -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:29:15.551 10:43:09 -- dd/sparse.sh@110 -- # file1=file_zero1 00:29:15.551 10:43:09 -- dd/sparse.sh@111 -- # file2=file_zero2 00:29:15.551 10:43:09 -- dd/sparse.sh@112 -- # file3=file_zero3 00:29:15.551 10:43:09 -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:29:15.551 10:43:09 -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:29:15.551 10:43:09 -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:29:15.551 10:43:09 -- dd/sparse.sh@118 -- # prepare 00:29:15.551 10:43:09 -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:29:15.551 10:43:09 -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:29:15.551 1+0 records in 00:29:15.551 1+0 records 
out 00:29:15.551 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00760194 s, 552 MB/s 00:29:15.551 10:43:09 -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:29:15.551 1+0 records in 00:29:15.551 1+0 records out 00:29:15.551 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00885695 s, 474 MB/s 00:29:15.551 10:43:09 -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:29:15.551 1+0 records in 00:29:15.551 1+0 records out 00:29:15.551 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00497837 s, 843 MB/s 00:29:15.551 10:43:09 -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:29:15.551 10:43:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:15.551 10:43:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:15.551 10:43:09 -- common/autotest_common.sh@10 -- # set +x 00:29:15.551 ************************************ 00:29:15.551 START TEST dd_sparse_file_to_file 00:29:15.551 ************************************ 00:29:15.551 10:43:09 -- common/autotest_common.sh@1104 -- # file_to_file 00:29:15.551 10:43:09 -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:29:15.551 10:43:09 -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:29:15.551 10:43:09 -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(["filename"]=$aio_disk ["name"]=$aio_bdev ["block_size"]=4096) 00:29:15.551 10:43:09 -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:29:15.551 10:43:09 -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(["bdev_name"]=$aio_bdev ["lvs_name"]=$lvstore) 00:29:15.551 10:43:09 -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:29:15.551 10:43:09 -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:29:15.551 10:43:09 -- dd/sparse.sh@41 -- # gen_conf 00:29:15.551 10:43:09 -- dd/common.sh@31 -- # xtrace_disable 00:29:15.551 10:43:09 -- common/autotest_common.sh@10 -- # set +x 00:29:15.551 { 00:29:15.551 "subsystems": [ 00:29:15.551 { 00:29:15.551 "subsystem": "bdev", 00:29:15.551 "config": [ 00:29:15.551 { 00:29:15.551 "params": { 00:29:15.551 "block_size": 4096, 00:29:15.551 "filename": "dd_sparse_aio_disk", 00:29:15.551 "name": "dd_aio" 00:29:15.551 }, 00:29:15.551 "method": "bdev_aio_create" 00:29:15.551 }, 00:29:15.551 { 00:29:15.551 "params": { 00:29:15.551 "lvs_name": "dd_lvstore", 00:29:15.551 "bdev_name": "dd_aio" 00:29:15.551 }, 00:29:15.551 "method": "bdev_lvol_create_lvstore" 00:29:15.551 }, 00:29:15.551 { 00:29:15.551 "method": "bdev_wait_for_examine" 00:29:15.551 } 00:29:15.551 ] 00:29:15.551 } 00:29:15.551 ] 00:29:15.551 } 00:29:15.809 [2024-07-12 10:43:09.468186] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:29:15.809 [2024-07-12 10:43:09.468535] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140073 ] 00:29:15.809 [2024-07-12 10:43:09.637530] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:16.068 [2024-07-12 10:43:09.796648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:17.260  Copying: 12/36 [MB] (average 1090 MBps) 00:29:17.260 00:29:17.260 10:43:11 -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:29:17.260 10:43:11 -- dd/sparse.sh@47 -- # stat1_s=37748736 00:29:17.260 10:43:11 -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:29:17.260 10:43:11 -- dd/sparse.sh@48 -- # stat2_s=37748736 00:29:17.260 10:43:11 -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:29:17.260 10:43:11 -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:29:17.519 10:43:11 -- dd/sparse.sh@52 -- # stat1_b=24576 00:29:17.519 10:43:11 -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:29:17.519 ************************************ 00:29:17.519 END TEST dd_sparse_file_to_file 00:29:17.519 ************************************ 00:29:17.519 10:43:11 -- dd/sparse.sh@53 -- # stat2_b=24576 00:29:17.519 10:43:11 -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:29:17.519 00:29:17.519 real 0m1.775s 00:29:17.519 user 0m1.398s 00:29:17.519 sys 0m0.244s 00:29:17.519 10:43:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:17.519 10:43:11 -- common/autotest_common.sh@10 -- # set +x 00:29:17.519 10:43:11 -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:29:17.519 10:43:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:17.519 10:43:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:17.519 10:43:11 -- common/autotest_common.sh@10 -- # set +x 00:29:17.519 ************************************ 00:29:17.519 START TEST dd_sparse_file_to_bdev 00:29:17.519 ************************************ 00:29:17.519 10:43:11 -- common/autotest_common.sh@1104 -- # file_to_bdev 00:29:17.519 10:43:11 -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(["filename"]=$aio_disk ["name"]=$aio_bdev ["block_size"]=4096) 00:29:17.519 10:43:11 -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:29:17.519 10:43:11 -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(["lvs_name"]=$lvstore ["lvol_name"]=$lvol ["size"]=37748736 ["thin_provision"]=true) 00:29:17.519 10:43:11 -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:29:17.519 10:43:11 -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:29:17.519 10:43:11 -- dd/sparse.sh@73 -- # gen_conf 00:29:17.519 10:43:11 -- dd/common.sh@31 -- # xtrace_disable 00:29:17.519 10:43:11 -- common/autotest_common.sh@10 -- # set +x 00:29:17.519 [2024-07-12 10:43:11.291040] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
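The stat numbers above decode cleanly: file_zero1 was built from three 4 MiB writes at block offsets 0, 4 and 8 (dd counts seek in bs units, so the data lands at 0, 16 MiB and 32 MiB), giving an apparent size of 36 MiB with only 12 MiB allocated, which is why the sparse copy reports 12/36 [MB]. A quick check in plain shell arithmetic, assuming 512 B stat blocks:

    echo $(( 9 * 4 * 1048576 ))        # 37748736, stat --printf=%s (apparent size)
    echo $(( 3 * 4 * 1048576 / 512 ))  # 24576, stat --printf=%b (allocated blocks)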
00:29:17.519 [2024-07-12 10:43:11.291380] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140132 ] 00:29:17.519 { 00:29:17.519 "subsystems": [ 00:29:17.519 { 00:29:17.519 "subsystem": "bdev", 00:29:17.519 "config": [ 00:29:17.519 { 00:29:17.519 "params": { 00:29:17.519 "block_size": 4096, 00:29:17.519 "filename": "dd_sparse_aio_disk", 00:29:17.519 "name": "dd_aio" 00:29:17.519 }, 00:29:17.519 "method": "bdev_aio_create" 00:29:17.519 }, 00:29:17.519 { 00:29:17.519 "params": { 00:29:17.519 "lvs_name": "dd_lvstore", 00:29:17.519 "thin_provision": true, 00:29:17.519 "lvol_name": "dd_lvol", 00:29:17.519 "size": 37748736 00:29:17.519 }, 00:29:17.519 "method": "bdev_lvol_create" 00:29:17.519 }, 00:29:17.519 { 00:29:17.519 "method": "bdev_wait_for_examine" 00:29:17.519 } 00:29:17.519 ] 00:29:17.519 } 00:29:17.519 ] 00:29:17.519 } 00:29:17.778 [2024-07-12 10:43:11.456518] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:17.778 [2024-07-12 10:43:11.627520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:18.036 [2024-07-12 10:43:11.890617] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:29:18.036  Copying: 12/36 [MB] (average 521 MBps)[2024-07-12 10:43:11.947125] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:29:19.411 00:29:19.411 00:29:19.411 ************************************ 00:29:19.411 END TEST dd_sparse_file_to_bdev 00:29:19.411 ************************************ 00:29:19.411 00:29:19.411 real 0m1.764s 00:29:19.411 user 0m1.454s 00:29:19.411 sys 0m0.204s 00:29:19.411 10:43:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:19.411 10:43:12 -- common/autotest_common.sh@10 -- # set +x 00:29:19.411 10:43:13 -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:29:19.411 10:43:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:19.411 10:43:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:19.411 10:43:13 -- common/autotest_common.sh@10 -- # set +x 00:29:19.411 ************************************ 00:29:19.411 START TEST dd_sparse_bdev_to_file 00:29:19.411 ************************************ 00:29:19.411 10:43:13 -- common/autotest_common.sh@1104 -- # bdev_to_file 00:29:19.411 10:43:13 -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:29:19.411 10:43:13 -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:29:19.411 10:43:13 -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(["filename"]=$aio_disk ["name"]=$aio_bdev ["block_size"]=4096) 00:29:19.411 10:43:13 -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:29:19.411 10:43:13 -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:29:19.411 10:43:13 -- dd/sparse.sh@91 -- # gen_conf 00:29:19.411 10:43:13 -- dd/common.sh@31 -- # xtrace_disable 00:29:19.411 10:43:13 -- common/autotest_common.sh@10 -- # set +x 00:29:19.411 [2024-07-12 10:43:13.110856] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:29:19.411 [2024-07-12 10:43:13.111226] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140203 ] 00:29:19.411 { 00:29:19.411 "subsystems": [ 00:29:19.411 { 00:29:19.411 "subsystem": "bdev", 00:29:19.411 "config": [ 00:29:19.411 { 00:29:19.411 "params": { 00:29:19.411 "block_size": 4096, 00:29:19.411 "filename": "dd_sparse_aio_disk", 00:29:19.411 "name": "dd_aio" 00:29:19.411 }, 00:29:19.411 "method": "bdev_aio_create" 00:29:19.411 }, 00:29:19.411 { 00:29:19.411 "method": "bdev_wait_for_examine" 00:29:19.411 } 00:29:19.411 ] 00:29:19.411 } 00:29:19.411 ] 00:29:19.411 } 00:29:19.411 [2024-07-12 10:43:13.277164] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:19.669 [2024-07-12 10:43:13.447052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:20.859  Copying: 12/36 [MB] (average 1090 MBps) 00:29:20.859 00:29:21.117 10:43:14 -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:29:21.117 10:43:14 -- dd/sparse.sh@97 -- # stat2_s=37748736 00:29:21.117 10:43:14 -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:29:21.117 10:43:14 -- dd/sparse.sh@98 -- # stat3_s=37748736 00:29:21.117 10:43:14 -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:29:21.117 10:43:14 -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:29:21.117 10:43:14 -- dd/sparse.sh@102 -- # stat2_b=24576 00:29:21.117 10:43:14 -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:29:21.117 ************************************ 00:29:21.117 END TEST dd_sparse_bdev_to_file 00:29:21.117 ************************************ 00:29:21.117 10:43:14 -- dd/sparse.sh@103 -- # stat3_b=24576 00:29:21.117 10:43:14 -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:29:21.117 00:29:21.117 real 0m1.755s 00:29:21.117 user 0m1.414s 00:29:21.117 sys 0m0.241s 00:29:21.117 10:43:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:21.117 10:43:14 -- common/autotest_common.sh@10 -- # set +x 00:29:21.117 10:43:14 -- dd/sparse.sh@1 -- # cleanup 00:29:21.117 10:43:14 -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:29:21.117 10:43:14 -- dd/sparse.sh@12 -- # rm file_zero1 00:29:21.117 10:43:14 -- dd/sparse.sh@13 -- # rm file_zero2 00:29:21.117 10:43:14 -- dd/sparse.sh@14 -- # rm file_zero3 00:29:21.117 00:29:21.117 real 0m5.586s 00:29:21.117 user 0m4.401s 00:29:21.117 sys 0m0.830s 00:29:21.117 10:43:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:21.117 ************************************ 00:29:21.117 END TEST spdk_dd_sparse 00:29:21.117 ************************************ 00:29:21.117 10:43:14 -- common/autotest_common.sh@10 -- # set +x 00:29:21.117 10:43:14 -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:29:21.117 10:43:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:21.117 10:43:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:21.117 10:43:14 -- common/autotest_common.sh@10 -- # set +x 00:29:21.117 ************************************ 00:29:21.117 START TEST spdk_dd_negative 00:29:21.117 ************************************ 00:29:21.117 10:43:14 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:29:21.117 * Looking for test storage... 
00:29:21.117 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:29:21.117 10:43:14 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:21.117 10:43:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:21.117 10:43:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:21.117 10:43:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:21.117 10:43:14 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:21.117 10:43:14 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:21.117 10:43:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:21.117 10:43:14 -- paths/export.sh@5 -- # export PATH 00:29:21.117 10:43:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:21.117 10:43:14 -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:29:21.117 10:43:14 -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:29:21.117 10:43:14 -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:29:21.117 10:43:14 -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:29:21.117 10:43:14 -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:29:21.117 10:43:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:21.117 10:43:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:21.117 10:43:14 -- common/autotest_common.sh@10 -- # set +x 00:29:21.117 ************************************ 00:29:21.117 
START TEST dd_invalid_arguments 00:29:21.117 ************************************ 00:29:21.117 10:43:15 -- common/autotest_common.sh@1104 -- # invalid_arguments 00:29:21.117 10:43:15 -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:29:21.117 10:43:15 -- common/autotest_common.sh@640 -- # local es=0 00:29:21.117 10:43:15 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:29:21.117 10:43:15 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:21.117 10:43:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:21.117 10:43:15 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:21.117 10:43:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:21.117 10:43:15 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:21.117 10:43:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:21.117 10:43:15 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:21.117 10:43:15 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:21.117 10:43:15 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:29:21.375 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:29:21.375 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:29:21.375 options: 00:29:21.375 -c, --config JSON config file (default none) 00:29:21.375 --json JSON config file (default none) 00:29:21.375 --json-ignore-init-errors 00:29:21.375 don't exit on invalid config entry 00:29:21.375 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:29:21.375 -g, --single-file-segments 00:29:21.375 force creating just one hugetlbfs file 00:29:21.375 -h, --help show this usage 00:29:21.375 -i, --shm-id shared memory ID (optional) 00:29:21.375 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:29:21.376 --lcores lcore to CPU mapping list. The list is in the format: 00:29:21.376 [<,lcores[@CPUs]>...] 00:29:21.376 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:29:21.376 Within the group, '-' is used for range separator, 00:29:21.376 ',' is used for single number separator. 00:29:21.376 '( )' can be omitted for single element group, 00:29:21.376 '@' can be omitted if cpus and lcores have the same value 00:29:21.376 -n, --mem-channels channel number of memory channels used for DPDK 00:29:21.376 -p, --main-core main (primary) core for DPDK 00:29:21.376 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:29:21.376 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:29:21.376 --disable-cpumask-locks Disable CPU core lock files. 
00:29:21.376 --silence-noticelog disable notice level logging to stderr 00:29:21.376 --msg-mempool-size global message memory pool size in count (default: 262143) 00:29:21.376 -u, --no-pci disable PCI access 00:29:21.376 --wait-for-rpc wait for RPCs to initialize subsystems 00:29:21.376 --max-delay maximum reactor delay (in microseconds) 00:29:21.376 -B, --pci-blocked pci addr to block (can be used more than once) 00:29:21.376 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:29:21.376 -R, --huge-unlink unlink huge files after initialization 00:29:21.376 -v, --version print SPDK version 00:29:21.376 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:29:21.376 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:29:21.376 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:29:21.376 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:29:21.376 Tracepoints vary in size and can use more than one trace entry. 00:29:21.376 --rpcs-allowed comma-separated list of permitted RPCS 00:29:21.376 --env-context Opaque context for use of the env implementation 00:29:21.376 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:29:21.376 --no-huge run without using hugepages 00:29:21.376 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid5f, bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, json_util, log, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, nvme_cuse, nvme_vfio, opal, reactor, rpc, rpc_client, sock, sock_posix, thread, trace, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:29:21.376 -e, --tpoint-group [:] 00:29:21.376 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, all) 00:29:21.376 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:29:21.376 Groups and [2024-07-12 10:43:15.071893] spdk_dd.c:1460:main: *ERROR*: Invalid arguments 00:29:21.376 masks can be combined (e.g. thread,bdev:0x1). 00:29:21.376 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:29:21.376 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:29:21.376 [--------- DD Options ---------] 00:29:21.376 --if Input file. Must specify either --if or --ib. 00:29:21.376 --ib Input bdev. Must specifier either --if or --ib 00:29:21.376 --of Output file. Must specify either --of or --ob. 00:29:21.376 --ob Output bdev. Must specify either --of or --ob. 00:29:21.376 --iflag Input file flags. 00:29:21.376 --oflag Output file flags. 00:29:21.376 --bs I/O unit size (default: 4096) 00:29:21.376 --qd Queue depth (default: 2) 00:29:21.376 --count I/O unit count. The number of I/O units to copy. (default: all) 00:29:21.376 --skip Skip this many I/O units at start of input. 
(default: 0) 00:29:21.376 --seek Skip this many I/O units at start of output. (default: 0) 00:29:21.376 --aio Force usage of AIO. (by default io_uring is used if available) 00:29:21.376 --sparse Enable hole skipping in input target 00:29:21.376 Available iflag and oflag values: 00:29:21.376 append - append mode 00:29:21.376 direct - use direct I/O for data 00:29:21.376 directory - fail unless a directory 00:29:21.376 dsync - use synchronized I/O for data 00:29:21.376 noatime - do not update access time 00:29:21.376 noctty - do not assign controlling terminal from file 00:29:21.376 nofollow - do not follow symlinks 00:29:21.376 nonblock - use non-blocking I/O 00:29:21.376 sync - use synchronized I/O for data and metadata 00:29:21.376 10:43:15 -- common/autotest_common.sh@643 -- # es=2 00:29:21.376 10:43:15 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:21.376 10:43:15 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:21.376 10:43:15 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:21.376 00:29:21.376 real 0m0.113s 00:29:21.376 user 0m0.045s 00:29:21.376 sys 0m0.063s 00:29:21.376 10:43:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:21.376 10:43:15 -- common/autotest_common.sh@10 -- # set +x 00:29:21.376 ************************************ 00:29:21.376 END TEST dd_invalid_arguments 00:29:21.376 ************************************ 00:29:21.376 10:43:15 -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:29:21.376 10:43:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:21.376 10:43:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:21.376 10:43:15 -- common/autotest_common.sh@10 -- # set +x 00:29:21.376 ************************************ 00:29:21.376 START TEST dd_double_input 00:29:21.376 ************************************ 00:29:21.376 10:43:15 -- common/autotest_common.sh@1104 -- # double_input 00:29:21.376 10:43:15 -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:29:21.376 10:43:15 -- common/autotest_common.sh@640 -- # local es=0 00:29:21.376 10:43:15 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:29:21.376 10:43:15 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:21.376 10:43:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:21.376 10:43:15 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:21.376 10:43:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:21.376 10:43:15 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:21.376 10:43:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:21.376 10:43:15 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:21.376 10:43:15 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:21.376 10:43:15 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:29:21.376 [2024-07-12 10:43:15.231417] spdk_dd.c:1467:main: *ERROR*: You may specify either --if or --ib, but not both. 
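[Editor's note — not part of the captured log. The usage dump above is spdk_dd rejecting the bogus --ii= flag with "Invalid arguments" (exit status 2, captured as es=2 just after it). Per the DD Options section of that dump, input comes from exactly one of --if/--ib and output goes to exactly one of --of/--ob. A hedged sketch of a well-formed invocation, with hypothetical file names:

    # Plain file-to-file copy; --bs and --count are optional per the help text.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=/tmp/in.bin --of=/tmp/out.bin --bs=4096 --count=16
]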
00:29:21.376 ************************************ 00:29:21.376 END TEST dd_double_input 00:29:21.376 ************************************ 00:29:21.376 10:43:15 -- common/autotest_common.sh@643 -- # es=22 00:29:21.376 10:43:15 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:21.376 10:43:15 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:21.376 10:43:15 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:21.376 00:29:21.376 real 0m0.107s 00:29:21.376 user 0m0.056s 00:29:21.376 sys 0m0.052s 00:29:21.376 10:43:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:21.376 10:43:15 -- common/autotest_common.sh@10 -- # set +x 00:29:21.635 10:43:15 -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:29:21.635 10:43:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:21.635 10:43:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:21.635 10:43:15 -- common/autotest_common.sh@10 -- # set +x 00:29:21.635 ************************************ 00:29:21.635 START TEST dd_double_output 00:29:21.635 ************************************ 00:29:21.635 10:43:15 -- common/autotest_common.sh@1104 -- # double_output 00:29:21.635 10:43:15 -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:29:21.635 10:43:15 -- common/autotest_common.sh@640 -- # local es=0 00:29:21.635 10:43:15 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:29:21.635 10:43:15 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:21.635 10:43:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:21.635 10:43:15 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:21.635 10:43:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:21.635 10:43:15 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:21.635 10:43:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:21.635 10:43:15 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:21.635 10:43:15 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:21.635 10:43:15 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:29:21.635 [2024-07-12 10:43:15.387864] spdk_dd.c:1473:main: *ERROR*: You may specify either --of or --ob, but not both. 
00:29:21.635 10:43:15 -- common/autotest_common.sh@643 -- # es=22 00:29:21.635 10:43:15 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:21.635 10:43:15 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:21.635 10:43:15 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:21.635 00:29:21.635 real 0m0.112s 00:29:21.635 user 0m0.056s 00:29:21.635 sys 0m0.056s 00:29:21.635 10:43:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:21.635 10:43:15 -- common/autotest_common.sh@10 -- # set +x 00:29:21.635 ************************************ 00:29:21.635 END TEST dd_double_output 00:29:21.635 ************************************ 00:29:21.635 10:43:15 -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:29:21.635 10:43:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:21.635 10:43:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:21.635 10:43:15 -- common/autotest_common.sh@10 -- # set +x 00:29:21.635 ************************************ 00:29:21.635 START TEST dd_no_input 00:29:21.635 ************************************ 00:29:21.635 10:43:15 -- common/autotest_common.sh@1104 -- # no_input 00:29:21.635 10:43:15 -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:29:21.635 10:43:15 -- common/autotest_common.sh@640 -- # local es=0 00:29:21.635 10:43:15 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:29:21.635 10:43:15 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:21.635 10:43:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:21.635 10:43:15 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:21.635 10:43:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:21.635 10:43:15 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:21.635 10:43:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:21.635 10:43:15 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:21.635 10:43:15 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:21.635 10:43:15 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:29:21.892 [2024-07-12 10:43:15.553033] spdk_dd.c:1479:main: *ERROR*: You must specify either --if or --ib 00:29:21.892 10:43:15 -- common/autotest_common.sh@643 -- # es=22 00:29:21.892 10:43:15 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:21.892 10:43:15 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:21.892 10:43:15 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:21.892 ************************************ 00:29:21.892 END TEST dd_no_input 00:29:21.892 ************************************ 00:29:21.892 00:29:21.892 real 0m0.106s 00:29:21.892 user 0m0.053s 00:29:21.892 sys 0m0.053s 00:29:21.892 10:43:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:21.892 10:43:15 -- common/autotest_common.sh@10 -- # set +x 00:29:21.892 10:43:15 -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:29:21.892 10:43:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:21.892 10:43:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:21.892 10:43:15 -- common/autotest_common.sh@10 -- # set +x 00:29:21.892 ************************************ 
00:29:21.893 START TEST dd_no_output 00:29:21.893 ************************************ 00:29:21.893 10:43:15 -- common/autotest_common.sh@1104 -- # no_output 00:29:21.893 10:43:15 -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:29:21.893 10:43:15 -- common/autotest_common.sh@640 -- # local es=0 00:29:21.893 10:43:15 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:29:21.893 10:43:15 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:21.893 10:43:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:21.893 10:43:15 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:21.893 10:43:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:21.893 10:43:15 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:21.893 10:43:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:21.893 10:43:15 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:21.893 10:43:15 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:21.893 10:43:15 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:29:21.893 [2024-07-12 10:43:15.714040] spdk_dd.c:1485:main: *ERROR*: You must specify either --of or --ob 00:29:21.893 10:43:15 -- common/autotest_common.sh@643 -- # es=22 00:29:21.893 10:43:15 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:21.893 10:43:15 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:21.893 10:43:15 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:21.893 00:29:21.893 real 0m0.107s 00:29:21.893 user 0m0.055s 00:29:21.893 sys 0m0.052s 00:29:21.893 10:43:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:21.893 ************************************ 00:29:21.893 END TEST dd_no_output 00:29:21.893 ************************************ 00:29:21.893 10:43:15 -- common/autotest_common.sh@10 -- # set +x 00:29:21.893 10:43:15 -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:29:21.893 10:43:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:21.893 10:43:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:21.893 10:43:15 -- common/autotest_common.sh@10 -- # set +x 00:29:22.151 ************************************ 00:29:22.151 START TEST dd_wrong_blocksize 00:29:22.151 ************************************ 00:29:22.151 10:43:15 -- common/autotest_common.sh@1104 -- # wrong_blocksize 00:29:22.151 10:43:15 -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:29:22.151 10:43:15 -- common/autotest_common.sh@640 -- # local es=0 00:29:22.151 10:43:15 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:29:22.151 10:43:15 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:22.151 10:43:15 -- common/autotest_common.sh@632 -- # case 
"$(type -t "$arg")" in 00:29:22.151 10:43:15 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:22.151 10:43:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:22.151 10:43:15 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:22.151 10:43:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:22.151 10:43:15 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:22.151 10:43:15 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:22.151 10:43:15 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:29:22.151 [2024-07-12 10:43:15.872755] spdk_dd.c:1491:main: *ERROR*: Invalid --bs value 00:29:22.151 10:43:15 -- common/autotest_common.sh@643 -- # es=22 00:29:22.151 10:43:15 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:22.151 ************************************ 00:29:22.151 END TEST dd_wrong_blocksize 00:29:22.151 ************************************ 00:29:22.151 10:43:15 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:22.151 10:43:15 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:22.151 00:29:22.151 real 0m0.110s 00:29:22.151 user 0m0.063s 00:29:22.151 sys 0m0.047s 00:29:22.151 10:43:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:22.151 10:43:15 -- common/autotest_common.sh@10 -- # set +x 00:29:22.151 10:43:15 -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:29:22.151 10:43:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:22.151 10:43:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:22.151 10:43:15 -- common/autotest_common.sh@10 -- # set +x 00:29:22.151 ************************************ 00:29:22.151 START TEST dd_smaller_blocksize 00:29:22.151 ************************************ 00:29:22.151 10:43:15 -- common/autotest_common.sh@1104 -- # smaller_blocksize 00:29:22.151 10:43:15 -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:29:22.151 10:43:15 -- common/autotest_common.sh@640 -- # local es=0 00:29:22.151 10:43:15 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:29:22.151 10:43:15 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:22.151 10:43:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:22.151 10:43:15 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:22.151 10:43:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:22.151 10:43:15 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:22.151 10:43:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:22.151 10:43:15 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:22.151 10:43:15 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 
00:29:22.151 10:43:15 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:29:22.151 [2024-07-12 10:43:16.041889] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:29:22.151 [2024-07-12 10:43:16.042084] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140471 ] 00:29:22.409 [2024-07-12 10:43:16.212228] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:22.666 [2024-07-12 10:43:16.456346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:23.232 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:29:23.232 [2024-07-12 10:43:16.988723] spdk_dd.c:1168:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:29:23.232 [2024-07-12 10:43:16.988836] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:23.798 [2024-07-12 10:43:17.569859] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:29:24.057 10:43:17 -- common/autotest_common.sh@643 -- # es=244 00:29:24.057 10:43:17 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:24.057 10:43:17 -- common/autotest_common.sh@652 -- # es=116 00:29:24.057 10:43:17 -- common/autotest_common.sh@653 -- # case "$es" in 00:29:24.057 10:43:17 -- common/autotest_common.sh@660 -- # es=1 00:29:24.057 10:43:17 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:24.057 00:29:24.057 real 0m1.924s 00:29:24.057 user 0m1.377s 00:29:24.057 sys 0m0.446s 00:29:24.057 10:43:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:24.057 10:43:17 -- common/autotest_common.sh@10 -- # set +x 00:29:24.057 ************************************ 00:29:24.057 END TEST dd_smaller_blocksize 00:29:24.057 ************************************ 00:29:24.057 10:43:17 -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:29:24.057 10:43:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:24.057 10:43:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:24.057 10:43:17 -- common/autotest_common.sh@10 -- # set +x 00:29:24.057 ************************************ 00:29:24.057 START TEST dd_invalid_count 00:29:24.057 ************************************ 00:29:24.057 10:43:17 -- common/autotest_common.sh@1104 -- # invalid_count 00:29:24.057 10:43:17 -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:29:24.057 10:43:17 -- common/autotest_common.sh@640 -- # local es=0 00:29:24.057 10:43:17 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:29:24.057 10:43:17 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:24.057 10:43:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:24.057 10:43:17 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:24.057 10:43:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:24.057 10:43:17 
-- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:24.057 10:43:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:24.057 10:43:17 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:24.057 10:43:17 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:24.057 10:43:17 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:29:24.316 [2024-07-12 10:43:18.008258] spdk_dd.c:1497:main: *ERROR*: Invalid --count value 00:29:24.316 10:43:18 -- common/autotest_common.sh@643 -- # es=22 00:29:24.316 10:43:18 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:24.316 10:43:18 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:24.316 10:43:18 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:24.316 00:29:24.316 real 0m0.108s 00:29:24.316 user 0m0.048s 00:29:24.316 sys 0m0.058s 00:29:24.316 10:43:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:24.316 10:43:18 -- common/autotest_common.sh@10 -- # set +x 00:29:24.316 ************************************ 00:29:24.316 END TEST dd_invalid_count 00:29:24.316 ************************************ 00:29:24.316 10:43:18 -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:29:24.316 10:43:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:24.316 10:43:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:24.316 10:43:18 -- common/autotest_common.sh@10 -- # set +x 00:29:24.316 ************************************ 00:29:24.316 START TEST dd_invalid_oflag 00:29:24.316 ************************************ 00:29:24.316 10:43:18 -- common/autotest_common.sh@1104 -- # invalid_oflag 00:29:24.316 10:43:18 -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:29:24.316 10:43:18 -- common/autotest_common.sh@640 -- # local es=0 00:29:24.316 10:43:18 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:29:24.316 10:43:18 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:24.316 10:43:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:24.316 10:43:18 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:24.316 10:43:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:24.316 10:43:18 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:24.316 10:43:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:24.316 10:43:18 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:24.316 10:43:18 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:24.316 10:43:18 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:29:24.316 [2024-07-12 10:43:18.167008] spdk_dd.c:1503:main: *ERROR*: --oflags may be used only with --of 00:29:24.316 10:43:18 -- common/autotest_common.sh@643 -- # es=22 00:29:24.316 10:43:18 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:24.316 10:43:18 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:24.316 
10:43:18 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:24.316 00:29:24.316 real 0m0.102s 00:29:24.316 user 0m0.060s 00:29:24.316 sys 0m0.040s 00:29:24.316 10:43:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:24.316 10:43:18 -- common/autotest_common.sh@10 -- # set +x 00:29:24.316 ************************************ 00:29:24.316 END TEST dd_invalid_oflag 00:29:24.316 ************************************ 00:29:24.575 10:43:18 -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:29:24.575 10:43:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:24.575 10:43:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:24.575 10:43:18 -- common/autotest_common.sh@10 -- # set +x 00:29:24.575 ************************************ 00:29:24.575 START TEST dd_invalid_iflag 00:29:24.575 ************************************ 00:29:24.575 10:43:18 -- common/autotest_common.sh@1104 -- # invalid_iflag 00:29:24.575 10:43:18 -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:29:24.575 10:43:18 -- common/autotest_common.sh@640 -- # local es=0 00:29:24.575 10:43:18 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:29:24.575 10:43:18 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:24.575 10:43:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:24.575 10:43:18 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:24.575 10:43:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:24.575 10:43:18 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:24.575 10:43:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:24.575 10:43:18 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:24.575 10:43:18 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:24.575 10:43:18 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:29:24.575 [2024-07-12 10:43:18.338297] spdk_dd.c:1509:main: *ERROR*: --iflags may be used only with --if 00:29:24.575 10:43:18 -- common/autotest_common.sh@643 -- # es=22 00:29:24.575 10:43:18 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:24.575 10:43:18 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:24.575 10:43:18 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:24.575 00:29:24.575 real 0m0.114s 00:29:24.575 user 0m0.061s 00:29:24.575 sys 0m0.052s 00:29:24.575 10:43:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:24.575 10:43:18 -- common/autotest_common.sh@10 -- # set +x 00:29:24.575 ************************************ 00:29:24.576 END TEST dd_invalid_iflag 00:29:24.576 ************************************ 00:29:24.576 10:43:18 -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:29:24.576 10:43:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:24.576 10:43:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:24.576 10:43:18 -- common/autotest_common.sh@10 -- # set +x 00:29:24.576 ************************************ 00:29:24.576 START TEST dd_unknown_flag 00:29:24.576 ************************************ 00:29:24.576 10:43:18 -- common/autotest_common.sh@1104 -- # 
unknown_flag 00:29:24.576 10:43:18 -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:29:24.576 10:43:18 -- common/autotest_common.sh@640 -- # local es=0 00:29:24.576 10:43:18 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:29:24.576 10:43:18 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:24.576 10:43:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:24.576 10:43:18 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:24.576 10:43:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:24.576 10:43:18 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:24.576 10:43:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:24.576 10:43:18 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:24.576 10:43:18 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:24.576 10:43:18 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:29:24.834 [2024-07-12 10:43:18.510769] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:29:24.834 [2024-07-12 10:43:18.511156] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140592 ] 00:29:24.834 [2024-07-12 10:43:18.681310] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:25.093 [2024-07-12 10:43:18.850886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:25.352 [2024-07-12 10:43:19.105914] spdk_dd.c: 985:parse_flags: *ERROR*: Unknown file flag: -1 00:29:25.352 [2024-07-12 10:43:19.106178] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Invalid argument 00:29:25.352 [2024-07-12 10:43:19.106311] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Invalid argument 00:29:25.352 [2024-07-12 10:43:19.106458] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:25.918 [2024-07-12 10:43:19.683772] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:29:26.177 ************************************ 00:29:26.177 END TEST dd_unknown_flag 00:29:26.177 ************************************ 00:29:26.177 10:43:20 -- common/autotest_common.sh@643 -- # es=234 00:29:26.177 10:43:20 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:26.177 10:43:20 -- common/autotest_common.sh@652 -- # es=106 00:29:26.177 10:43:20 -- common/autotest_common.sh@653 -- # case "$es" in 00:29:26.177 10:43:20 -- common/autotest_common.sh@660 -- # es=1 00:29:26.177 10:43:20 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:26.177 00:29:26.177 real 0m1.570s 00:29:26.177 user 0m1.213s 00:29:26.177 sys 0m0.255s 00:29:26.177 10:43:20 -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:29:26.177 10:43:20 -- common/autotest_common.sh@10 -- # set +x 00:29:26.177 10:43:20 -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:29:26.177 10:43:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:26.177 10:43:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:26.177 10:43:20 -- common/autotest_common.sh@10 -- # set +x 00:29:26.177 ************************************ 00:29:26.177 START TEST dd_invalid_json 00:29:26.177 ************************************ 00:29:26.177 10:43:20 -- common/autotest_common.sh@1104 -- # invalid_json 00:29:26.177 10:43:20 -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:29:26.177 10:43:20 -- dd/negative_dd.sh@95 -- # : 00:29:26.177 10:43:20 -- common/autotest_common.sh@640 -- # local es=0 00:29:26.177 10:43:20 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:29:26.177 10:43:20 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:26.177 10:43:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:26.177 10:43:20 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:26.177 10:43:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:26.177 10:43:20 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:26.177 10:43:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:26.177 10:43:20 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:26.177 10:43:20 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:26.177 10:43:20 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:29:26.436 [2024-07-12 10:43:20.134042] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:29:26.436 [2024-07-12 10:43:20.135051] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140640 ] 00:29:26.436 [2024-07-12 10:43:20.304526] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:26.695 [2024-07-12 10:43:20.473772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:26.695 [2024-07-12 10:43:20.474145] json_config.c: 529:app_json_config_read: *ERROR*: Parsing JSON configuration failed (-2) 00:29:26.695 [2024-07-12 10:43:20.474316] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:26.695 [2024-07-12 10:43:20.474413] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:29:26.954 ************************************ 00:29:26.954 END TEST dd_invalid_json 00:29:26.954 ************************************ 00:29:26.954 10:43:20 -- common/autotest_common.sh@643 -- # es=234 00:29:26.954 10:43:20 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:26.954 10:43:20 -- common/autotest_common.sh@652 -- # es=106 00:29:26.954 10:43:20 -- common/autotest_common.sh@653 -- # case "$es" in 00:29:26.954 10:43:20 -- common/autotest_common.sh@660 -- # es=1 00:29:26.954 10:43:20 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:26.954 00:29:26.954 real 0m0.727s 00:29:26.954 user 0m0.518s 00:29:26.954 sys 0m0.107s 00:29:26.954 10:43:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:26.954 10:43:20 -- common/autotest_common.sh@10 -- # set +x 00:29:26.954 00:29:26.954 real 0m5.924s 00:29:26.954 user 0m3.993s 00:29:26.954 sys 0m1.542s 00:29:26.954 10:43:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:26.954 10:43:20 -- common/autotest_common.sh@10 -- # set +x 00:29:26.954 ************************************ 00:29:26.954 END TEST spdk_dd_negative 00:29:26.954 ************************************ 00:29:27.212 00:29:27.212 real 2m19.966s 00:29:27.212 user 1m48.938s 00:29:27.212 sys 0m21.334s 00:29:27.212 10:43:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:27.213 10:43:20 -- common/autotest_common.sh@10 -- # set +x 00:29:27.213 ************************************ 00:29:27.213 END TEST spdk_dd 00:29:27.213 ************************************ 00:29:27.213 10:43:20 -- spdk/autotest.sh@217 -- # '[' 1 -eq 1 ']' 00:29:27.213 10:43:20 -- spdk/autotest.sh@218 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:29:27.213 10:43:20 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:27.213 10:43:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:27.213 10:43:20 -- common/autotest_common.sh@10 -- # set +x 00:29:27.213 ************************************ 00:29:27.213 START TEST blockdev_nvme 00:29:27.213 ************************************ 00:29:27.213 10:43:20 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:29:27.213 * Looking for test storage... 
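[Editor's note — not part of the captured log. dd_invalid_json above hands spdk_dd an unparsable config on /dev/fd/62 and expects json_config.c to report "Parsing JSON configuration failed (-2)". For contrast, a valid config has the shape the sparse tests generated earlier; a sketch that rebuilds it in bash, with names taken from that earlier output:

    cat > dd.json <<'EOF'
    { "subsystems": [ { "subsystem": "bdev", "config": [
        { "params": { "block_size": 4096,
                      "filename": "dd_sparse_aio_disk",
                      "name": "dd_aio" },
          "method": "bdev_aio_create" },
        { "method": "bdev_wait_for_examine" } ] } ] }
    EOF
    # e.g. spdk_dd --ib=dd_aio --of=/tmp/out.bin --json dd.json
]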
00:29:27.213 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:29:27.213 10:43:20 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:29:27.213 10:43:20 -- bdev/nbd_common.sh@6 -- # set -e 00:29:27.213 10:43:20 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:29:27.213 10:43:20 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:29:27.213 10:43:20 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:29:27.213 10:43:20 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:29:27.213 10:43:20 -- bdev/blockdev.sh@18 -- # : 00:29:27.213 10:43:20 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:29:27.213 10:43:20 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:29:27.213 10:43:20 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:29:27.213 10:43:20 -- bdev/blockdev.sh@672 -- # uname -s 00:29:27.213 10:43:20 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:29:27.213 10:43:20 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:29:27.213 10:43:20 -- bdev/blockdev.sh@680 -- # test_type=nvme 00:29:27.213 10:43:20 -- bdev/blockdev.sh@681 -- # crypto_device= 00:29:27.213 10:43:20 -- bdev/blockdev.sh@682 -- # dek= 00:29:27.213 10:43:21 -- bdev/blockdev.sh@683 -- # env_ctx= 00:29:27.213 10:43:21 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:29:27.213 10:43:21 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:29:27.213 10:43:21 -- bdev/blockdev.sh@688 -- # [[ nvme == bdev ]] 00:29:27.213 10:43:21 -- bdev/blockdev.sh@688 -- # [[ nvme == crypto_* ]] 00:29:27.213 10:43:21 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:29:27.213 10:43:21 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=140734 00:29:27.213 10:43:21 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:29:27.213 10:43:21 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:29:27.213 10:43:21 -- bdev/blockdev.sh@47 -- # waitforlisten 140734 00:29:27.213 10:43:21 -- common/autotest_common.sh@819 -- # '[' -z 140734 ']' 00:29:27.213 10:43:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:27.213 10:43:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:27.213 10:43:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:27.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:27.213 10:43:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:27.213 10:43:21 -- common/autotest_common.sh@10 -- # set +x 00:29:27.213 [2024-07-12 10:43:21.078647] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:29:27.213 [2024-07-12 10:43:21.079131] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140734 ] 00:29:27.471 [2024-07-12 10:43:21.236802] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:27.730 [2024-07-12 10:43:21.396842] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:27.730 [2024-07-12 10:43:21.397340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:29.157 10:43:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:29.157 10:43:22 -- common/autotest_common.sh@852 -- # return 0 00:29:29.157 10:43:22 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:29:29.157 10:43:22 -- bdev/blockdev.sh@697 -- # setup_nvme_conf 00:29:29.157 10:43:22 -- bdev/blockdev.sh@79 -- # local json 00:29:29.157 10:43:22 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:29:29.157 10:43:22 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:29:29.157 10:43:22 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } } ] }'\''' 00:29:29.157 10:43:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:29.157 10:43:22 -- common/autotest_common.sh@10 -- # set +x 00:29:29.157 10:43:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:29.157 10:43:22 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:29:29.157 10:43:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:29.157 10:43:22 -- common/autotest_common.sh@10 -- # set +x 00:29:29.157 10:43:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:29.157 10:43:22 -- bdev/blockdev.sh@738 -- # cat 00:29:29.157 10:43:22 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:29:29.157 10:43:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:29.157 10:43:22 -- common/autotest_common.sh@10 -- # set +x 00:29:29.157 10:43:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:29.157 10:43:22 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:29:29.157 10:43:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:29.157 10:43:22 -- common/autotest_common.sh@10 -- # set +x 00:29:29.157 10:43:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:29.157 10:43:22 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:29:29.157 10:43:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:29.157 10:43:22 -- common/autotest_common.sh@10 -- # set +x 00:29:29.157 10:43:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:29.157 10:43:22 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:29:29.157 10:43:22 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:29:29.157 10:43:22 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:29:29.157 10:43:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:29.157 10:43:22 -- common/autotest_common.sh@10 -- # set +x 00:29:29.157 10:43:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:29.158 10:43:22 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:29:29.158 10:43:22 -- bdev/blockdev.sh@747 -- # jq -r .name 00:29:29.158 10:43:22 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' 
"aliases": [' ' "f11efcb3-482a-4ef9-b281-6addbb55771a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "f11efcb3-482a-4ef9-b281-6addbb55771a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:06.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:06.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:29:29.158 10:43:22 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:29:29.158 10:43:22 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1 00:29:29.158 10:43:22 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:29:29.158 10:43:22 -- bdev/blockdev.sh@752 -- # killprocess 140734 00:29:29.158 10:43:22 -- common/autotest_common.sh@926 -- # '[' -z 140734 ']' 00:29:29.158 10:43:22 -- common/autotest_common.sh@930 -- # kill -0 140734 00:29:29.158 10:43:22 -- common/autotest_common.sh@931 -- # uname 00:29:29.158 10:43:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:29.158 10:43:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 140734 00:29:29.158 10:43:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:29.158 10:43:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:29.158 10:43:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 140734' 00:29:29.158 killing process with pid 140734 00:29:29.158 10:43:23 -- common/autotest_common.sh@945 -- # kill 140734 00:29:29.158 10:43:23 -- common/autotest_common.sh@950 -- # wait 140734 00:29:31.084 10:43:24 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:31.084 10:43:24 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:29:31.084 10:43:24 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:29:31.084 10:43:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:31.084 10:43:24 -- common/autotest_common.sh@10 -- # set +x 00:29:31.084 ************************************ 00:29:31.084 START TEST bdev_hello_world 00:29:31.084 ************************************ 00:29:31.084 10:43:24 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:29:31.084 [2024-07-12 10:43:24.817420] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:29:31.084 [2024-07-12 10:43:24.817840] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140851 ] 00:29:31.084 [2024-07-12 10:43:24.973253] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:31.341 [2024-07-12 10:43:25.133344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:31.905 [2024-07-12 10:43:25.510666] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:29:31.906 [2024-07-12 10:43:25.510909] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:29:31.906 [2024-07-12 10:43:25.510990] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:29:31.906 [2024-07-12 10:43:25.513695] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:29:31.906 [2024-07-12 10:43:25.514229] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:29:31.906 [2024-07-12 10:43:25.514382] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:29:31.906 [2024-07-12 10:43:25.514648] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:29:31.906 00:29:31.906 [2024-07-12 10:43:25.514812] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:29:32.470 ************************************ 00:29:32.470 END TEST bdev_hello_world 00:29:32.470 ************************************ 00:29:32.470 00:29:32.470 real 0m1.594s 00:29:32.470 user 0m1.266s 00:29:32.470 sys 0m0.228s 00:29:32.470 10:43:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:32.470 10:43:26 -- common/autotest_common.sh@10 -- # set +x 00:29:32.727 10:43:26 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:29:32.727 10:43:26 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:32.727 10:43:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:32.727 10:43:26 -- common/autotest_common.sh@10 -- # set +x 00:29:32.727 ************************************ 00:29:32.727 START TEST bdev_bounds 00:29:32.727 ************************************ 00:29:32.727 10:43:26 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:29:32.727 10:43:26 -- bdev/blockdev.sh@288 -- # bdevio_pid=140889 00:29:32.727 10:43:26 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:29:32.727 10:43:26 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:29:32.727 10:43:26 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 140889' 00:29:32.727 Process bdevio pid: 140889 00:29:32.727 10:43:26 -- bdev/blockdev.sh@291 -- # waitforlisten 140889 00:29:32.727 10:43:26 -- common/autotest_common.sh@819 -- # '[' -z 140889 ']' 00:29:32.727 10:43:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:32.727 10:43:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:32.727 10:43:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:32.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:32.727 10:43:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:32.727 10:43:26 -- common/autotest_common.sh@10 -- # set +x 00:29:32.727 [2024-07-12 10:43:26.472900] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:29:32.727 [2024-07-12 10:43:26.473284] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140889 ] 00:29:32.985 [2024-07-12 10:43:26.650110] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:32.985 [2024-07-12 10:43:26.819958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:32.985 [2024-07-12 10:43:26.820098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:32.985 [2024-07-12 10:43:26.820094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:33.547 10:43:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:33.547 10:43:27 -- common/autotest_common.sh@852 -- # return 0 00:29:33.547 10:43:27 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:29:33.806 I/O targets: 00:29:33.806 Nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:29:33.806 00:29:33.806 00:29:33.806 CUnit - A unit testing framework for C - Version 2.1-3 00:29:33.806 http://cunit.sourceforge.net/ 00:29:33.806 00:29:33.806 00:29:33.806 Suite: bdevio tests on: Nvme0n1 00:29:33.806 Test: blockdev write read block ...passed 00:29:33.806 Test: blockdev write zeroes read block ...passed 00:29:33.806 Test: blockdev write zeroes read no split ...passed 00:29:33.806 Test: blockdev write zeroes read split ...passed 00:29:33.806 Test: blockdev write zeroes read split partial ...passed 00:29:33.806 Test: blockdev reset ...[2024-07-12 10:43:27.570913] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:29:33.806 [2024-07-12 10:43:27.574389] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:29:33.806 passed 00:29:33.806 Test: blockdev write read 8 blocks ...passed 00:29:33.806 Test: blockdev write read size > 128k ...passed 00:29:33.806 Test: blockdev write read invalid size ...passed 00:29:33.806 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:33.806 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:33.806 Test: blockdev write read max offset ...passed 00:29:33.806 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:33.806 Test: blockdev writev readv 8 blocks ...passed 00:29:33.806 Test: blockdev writev readv 30 x 1block ...passed 00:29:33.806 Test: blockdev writev readv block ...passed 00:29:33.806 Test: blockdev writev readv size > 128k ...passed 00:29:33.806 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:33.806 Test: blockdev comparev and writev ...[2024-07-12 10:43:27.584293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2f80d000 len:0x1000 00:29:33.806 [2024-07-12 10:43:27.584520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:29:33.806 passed 00:29:33.806 Test: blockdev nvme passthru rw ...passed 00:29:33.806 Test: blockdev nvme passthru vendor specific ...[2024-07-12 10:43:27.585707] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:29:33.806 [2024-07-12 10:43:27.585855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:29:33.806 passed 00:29:33.806 Test: blockdev nvme admin passthru ...passed 00:29:33.806 Test: blockdev copy ...passed 00:29:33.806 00:29:33.806 Run Summary: Type Total Ran Passed Failed Inactive 00:29:33.806 suites 1 1 n/a 0 0 00:29:33.806 tests 23 23 23 0 0 00:29:33.806 asserts 152 152 152 0 n/a 00:29:33.806 00:29:33.806 Elapsed time = 0.180 seconds 00:29:33.806 0 00:29:33.807 10:43:27 -- bdev/blockdev.sh@293 -- # killprocess 140889 00:29:33.807 10:43:27 -- common/autotest_common.sh@926 -- # '[' -z 140889 ']' 00:29:33.807 10:43:27 -- common/autotest_common.sh@930 -- # kill -0 140889 00:29:33.807 10:43:27 -- common/autotest_common.sh@931 -- # uname 00:29:33.807 10:43:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:33.807 10:43:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 140889 00:29:33.807 killing process with pid 140889 00:29:33.807 10:43:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:33.807 10:43:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:33.807 10:43:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 140889' 00:29:33.807 10:43:27 -- common/autotest_common.sh@945 -- # kill 140889 00:29:33.807 10:43:27 -- common/autotest_common.sh@950 -- # wait 140889 00:29:34.740 ************************************ 00:29:34.740 END TEST bdev_bounds 00:29:34.740 ************************************ 00:29:34.740 10:43:28 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:29:34.740 00:29:34.740 real 0m2.225s 00:29:34.740 user 0m5.295s 00:29:34.740 sys 0m0.317s 00:29:34.740 10:43:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:34.740 10:43:28 -- common/autotest_common.sh@10 -- # set +x 00:29:34.998 10:43:28 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 
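The bdev_bounds test above drives bdevio in wait-for-RPC mode and triggers the suite through tests.py; condensed, with the pid/trap handling simplified:

# start bdevio waiting for an RPC kick (-w), no reserved memory (-s 0)
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' &
bdevio_pid=$!
# once /var/tmp/spdk.sock answers, run every registered bounds test
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
kill "$bdevio_pid"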
00:29:34.998 10:43:28 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:29:34.998 10:43:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:34.998 10:43:28 -- common/autotest_common.sh@10 -- # set +x 00:29:34.998 ************************************ 00:29:34.998 START TEST bdev_nbd 00:29:34.998 ************************************ 00:29:34.998 10:43:28 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:29:34.998 10:43:28 -- bdev/blockdev.sh@298 -- # uname -s 00:29:34.998 10:43:28 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:29:34.998 10:43:28 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:34.998 10:43:28 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:29:34.998 10:43:28 -- bdev/blockdev.sh@302 -- # bdev_all=($2) 00:29:34.998 10:43:28 -- bdev/blockdev.sh@302 -- # local bdev_all 00:29:34.998 10:43:28 -- bdev/blockdev.sh@303 -- # local bdev_num=1 00:29:34.998 10:43:28 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:29:34.998 10:43:28 -- bdev/blockdev.sh@309 -- # nbd_all=(/dev/nbd+([0-9])) 00:29:34.998 10:43:28 -- bdev/blockdev.sh@309 -- # local nbd_all 00:29:34.998 10:43:28 -- bdev/blockdev.sh@310 -- # bdev_num=1 00:29:34.998 10:43:28 -- bdev/blockdev.sh@312 -- # nbd_list=(${nbd_all[@]::bdev_num}) 00:29:34.998 10:43:28 -- bdev/blockdev.sh@312 -- # local nbd_list 00:29:34.998 10:43:28 -- bdev/blockdev.sh@313 -- # bdev_list=(${bdev_all[@]::bdev_num}) 00:29:34.998 10:43:28 -- bdev/blockdev.sh@313 -- # local bdev_list 00:29:34.998 10:43:28 -- bdev/blockdev.sh@316 -- # nbd_pid=140951 00:29:34.998 10:43:28 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:29:34.998 10:43:28 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:29:34.998 10:43:28 -- bdev/blockdev.sh@318 -- # waitforlisten 140951 /var/tmp/spdk-nbd.sock 00:29:34.998 10:43:28 -- common/autotest_common.sh@819 -- # '[' -z 140951 ']' 00:29:34.998 10:43:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:29:34.998 10:43:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:34.998 10:43:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:29:34.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:29:34.998 10:43:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:34.998 10:43:28 -- common/autotest_common.sh@10 -- # set +x 00:29:34.998 [2024-07-12 10:43:28.749409] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:29:34.998 [2024-07-12 10:43:28.749728] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:34.998 [2024-07-12 10:43:28.905300] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:35.256 [2024-07-12 10:43:29.090596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:35.834 10:43:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:35.834 10:43:29 -- common/autotest_common.sh@852 -- # return 0 00:29:35.834 10:43:29 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock Nvme0n1 00:29:35.834 10:43:29 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:35.834 10:43:29 -- bdev/nbd_common.sh@114 -- # bdev_list=($2) 00:29:35.834 10:43:29 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:29:35.834 10:43:29 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock Nvme0n1 00:29:35.834 10:43:29 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:35.834 10:43:29 -- bdev/nbd_common.sh@23 -- # bdev_list=($2) 00:29:35.834 10:43:29 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:29:35.834 10:43:29 -- bdev/nbd_common.sh@24 -- # local i 00:29:35.834 10:43:29 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:29:35.834 10:43:29 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:29:35.834 10:43:29 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:29:35.834 10:43:29 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:29:36.091 10:43:29 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:29:36.091 10:43:29 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:29:36.091 10:43:29 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:29:36.091 10:43:29 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:29:36.091 10:43:29 -- common/autotest_common.sh@857 -- # local i 00:29:36.091 10:43:29 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:29:36.091 10:43:29 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:29:36.091 10:43:29 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:29:36.091 10:43:29 -- common/autotest_common.sh@861 -- # break 00:29:36.091 10:43:29 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:29:36.091 10:43:29 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:29:36.091 10:43:29 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:36.091 1+0 records in 00:29:36.091 1+0 records out 00:29:36.091 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000655923 s, 6.2 MB/s 00:29:36.091 10:43:29 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:36.092 10:43:29 -- common/autotest_common.sh@874 -- # size=4096 00:29:36.092 10:43:29 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:36.092 10:43:29 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:29:36.092 10:43:29 -- common/autotest_common.sh@877 -- # return 0 00:29:36.092 10:43:29 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:36.092 10:43:29 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:29:36.092 10:43:29 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:36.349 10:43:30 -- 
bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:29:36.349 { 00:29:36.349 "nbd_device": "/dev/nbd0", 00:29:36.349 "bdev_name": "Nvme0n1" 00:29:36.349 } 00:29:36.349 ]' 00:29:36.349 10:43:30 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:29:36.349 10:43:30 -- bdev/nbd_common.sh@119 -- # echo '[ 00:29:36.349 { 00:29:36.349 "nbd_device": "/dev/nbd0", 00:29:36.349 "bdev_name": "Nvme0n1" 00:29:36.349 } 00:29:36.349 ]' 00:29:36.349 10:43:30 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:29:36.349 10:43:30 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:29:36.349 10:43:30 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:36.349 10:43:30 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:29:36.349 10:43:30 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:36.349 10:43:30 -- bdev/nbd_common.sh@51 -- # local i 00:29:36.349 10:43:30 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:36.349 10:43:30 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:29:36.606 10:43:30 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:36.607 10:43:30 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:36.607 10:43:30 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:36.607 10:43:30 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:36.607 10:43:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:36.607 10:43:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:36.607 10:43:30 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:29:36.607 10:43:30 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:29:36.607 10:43:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:36.607 10:43:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:36.607 10:43:30 -- bdev/nbd_common.sh@41 -- # break 00:29:36.607 10:43:30 -- bdev/nbd_common.sh@45 -- # return 0 00:29:36.607 10:43:30 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:36.607 10:43:30 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:36.864 10:43:30 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:36.864 10:43:30 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:29:36.864 10:43:30 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:29:36.864 10:43:30 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:37.121 10:43:30 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:29:37.121 10:43:30 -- bdev/nbd_common.sh@65 -- # echo '' 00:29:37.121 10:43:30 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:37.121 10:43:30 -- bdev/nbd_common.sh@65 -- # true 00:29:37.121 10:43:30 -- bdev/nbd_common.sh@65 -- # count=0 00:29:37.121 10:43:30 -- bdev/nbd_common.sh@66 -- # echo 0 00:29:37.121 10:43:30 -- bdev/nbd_common.sh@122 -- # count=0 00:29:37.121 10:43:30 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:29:37.121 10:43:30 -- bdev/nbd_common.sh@127 -- # return 0 00:29:37.121 10:43:30 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:29:37.121 10:43:30 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:37.122 10:43:30 -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:29:37.122 10:43:30 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:29:37.122 10:43:30 -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:29:37.122 10:43:30 -- bdev/nbd_common.sh@92 -- # local 
nbd_list 00:29:37.122 10:43:30 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:29:37.122 10:43:30 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:37.122 10:43:30 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:29:37.122 10:43:30 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:37.122 10:43:30 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:29:37.122 10:43:30 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:37.122 10:43:30 -- bdev/nbd_common.sh@12 -- # local i 00:29:37.122 10:43:30 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:37.122 10:43:30 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:37.122 10:43:30 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:29:37.122 /dev/nbd0 00:29:37.122 10:43:31 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:37.122 10:43:31 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:37.122 10:43:31 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:29:37.122 10:43:31 -- common/autotest_common.sh@857 -- # local i 00:29:37.122 10:43:31 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:29:37.122 10:43:31 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:29:37.122 10:43:31 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:29:37.122 10:43:31 -- common/autotest_common.sh@861 -- # break 00:29:37.122 10:43:31 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:29:37.122 10:43:31 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:29:37.122 10:43:31 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:37.122 1+0 records in 00:29:37.122 1+0 records out 00:29:37.122 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000422184 s, 9.7 MB/s 00:29:37.122 10:43:31 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:37.380 10:43:31 -- common/autotest_common.sh@874 -- # size=4096 00:29:37.380 10:43:31 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:37.380 10:43:31 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:29:37.380 10:43:31 -- common/autotest_common.sh@877 -- # return 0 00:29:37.380 10:43:31 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:37.380 10:43:31 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:37.380 10:43:31 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:37.380 10:43:31 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:37.380 10:43:31 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:37.637 10:43:31 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:29:37.637 { 00:29:37.637 "nbd_device": "/dev/nbd0", 00:29:37.637 "bdev_name": "Nvme0n1" 00:29:37.637 } 00:29:37.637 ]' 00:29:37.637 10:43:31 -- bdev/nbd_common.sh@64 -- # echo '[ 00:29:37.637 { 00:29:37.637 "nbd_device": "/dev/nbd0", 00:29:37.637 "bdev_name": "Nvme0n1" 00:29:37.637 } 00:29:37.637 ]' 00:29:37.638 10:43:31 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:37.638 10:43:31 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:29:37.638 10:43:31 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:29:37.638 10:43:31 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:37.638 10:43:31 -- bdev/nbd_common.sh@65 -- # count=1 00:29:37.638 10:43:31 -- bdev/nbd_common.sh@66 -- # echo 1 
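The nbd bring-up traced above maps a bdev onto /dev/nbd0 over the dedicated spdk-nbd RPC socket and then recovers the mapping with jq; the bare sequence:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0   # expose the bdev as a kernel block device
$rpc -s /var/tmp/spdk-nbd.sock nbd_get_disks | jq -r '.[] | .nbd_device'
$rpc -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0            # tear the mapping down again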
00:29:37.638 10:43:31 -- bdev/nbd_common.sh@95 -- # count=1 00:29:37.638 10:43:31 -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:29:37.638 10:43:31 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:29:37.638 10:43:31 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:29:37.638 10:43:31 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:29:37.638 10:43:31 -- bdev/nbd_common.sh@71 -- # local operation=write 00:29:37.638 10:43:31 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:29:37.638 10:43:31 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:29:37.638 10:43:31 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:29:37.638 256+0 records in 00:29:37.638 256+0 records out 00:29:37.638 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00574387 s, 183 MB/s 00:29:37.638 10:43:31 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:29:37.638 10:43:31 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:29:37.638 256+0 records in 00:29:37.638 256+0 records out 00:29:37.638 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0586142 s, 17.9 MB/s 00:29:37.638 10:43:31 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:29:37.638 10:43:31 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:29:37.638 10:43:31 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:29:37.638 10:43:31 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:29:37.638 10:43:31 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:29:37.638 10:43:31 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:29:37.638 10:43:31 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:29:37.638 10:43:31 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:29:37.638 10:43:31 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:29:37.638 10:43:31 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:29:37.638 10:43:31 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:29:37.638 10:43:31 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:37.638 10:43:31 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:29:37.638 10:43:31 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:37.638 10:43:31 -- bdev/nbd_common.sh@51 -- # local i 00:29:37.638 10:43:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:37.638 10:43:31 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:29:37.896 10:43:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:37.896 10:43:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:37.896 10:43:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:37.896 10:43:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:37.896 10:43:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:37.896 10:43:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:37.896 10:43:31 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:29:38.154 10:43:31 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:29:38.154 10:43:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:38.154 10:43:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:38.154 10:43:31 -- bdev/nbd_common.sh@41 -- # break 00:29:38.154 10:43:31 -- 
bdev/nbd_common.sh@45 -- # return 0 00:29:38.154 10:43:31 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:38.154 10:43:31 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:38.154 10:43:31 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:38.411 10:43:32 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:29:38.411 10:43:32 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:29:38.411 10:43:32 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:38.411 10:43:32 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:29:38.411 10:43:32 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:38.411 10:43:32 -- bdev/nbd_common.sh@65 -- # echo '' 00:29:38.411 10:43:32 -- bdev/nbd_common.sh@65 -- # true 00:29:38.411 10:43:32 -- bdev/nbd_common.sh@65 -- # count=0 00:29:38.411 10:43:32 -- bdev/nbd_common.sh@66 -- # echo 0 00:29:38.411 10:43:32 -- bdev/nbd_common.sh@104 -- # count=0 00:29:38.411 10:43:32 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:29:38.411 10:43:32 -- bdev/nbd_common.sh@109 -- # return 0 00:29:38.412 10:43:32 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:29:38.412 10:43:32 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:38.412 10:43:32 -- bdev/nbd_common.sh@132 -- # nbd_list=($2) 00:29:38.412 10:43:32 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:29:38.412 10:43:32 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:29:38.412 10:43:32 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:29:38.669 malloc_lvol_verify 00:29:38.669 10:43:32 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:29:38.928 9a2e726a-0f8d-4802-a2ab-987fcd9b1a3a 00:29:38.928 10:43:32 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:29:39.187 3809a003-b43b-471a-848c-a7b2b98ab93e 00:29:39.187 10:43:32 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:29:39.187 /dev/nbd0 00:29:39.444 10:43:33 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:29:39.445 mke2fs 1.45.5 (07-Jan-2020) 00:29:39.445 00:29:39.445 Filesystem too small for a journal 00:29:39.445 Creating filesystem with 1024 4k blocks and 1024 inodes 00:29:39.445 00:29:39.445 Allocating group tables: 0/1 done 00:29:39.445 Writing inode tables: 0/1 done 00:29:39.445 Writing superblocks and filesystem accounting information: 0/1 done 00:29:39.445 00:29:39.445 10:43:33 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:29:39.445 10:43:33 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:29:39.445 10:43:33 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:39.445 10:43:33 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:29:39.445 10:43:33 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:39.445 10:43:33 -- bdev/nbd_common.sh@51 -- # local i 00:29:39.445 10:43:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:39.445 10:43:33 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:29:39.445 10:43:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:39.445 10:43:33 -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:39.445 10:43:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:39.445 10:43:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:39.445 10:43:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:39.445 10:43:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:39.445 10:43:33 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:29:39.703 10:43:33 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:29:39.703 10:43:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:39.703 10:43:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:39.703 10:43:33 -- bdev/nbd_common.sh@41 -- # break 00:29:39.703 10:43:33 -- bdev/nbd_common.sh@45 -- # return 0 00:29:39.703 10:43:33 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:29:39.703 10:43:33 -- bdev/nbd_common.sh@147 -- # return 0 00:29:39.703 10:43:33 -- bdev/blockdev.sh@324 -- # killprocess 140951 00:29:39.703 10:43:33 -- common/autotest_common.sh@926 -- # '[' -z 140951 ']' 00:29:39.703 10:43:33 -- common/autotest_common.sh@930 -- # kill -0 140951 00:29:39.703 10:43:33 -- common/autotest_common.sh@931 -- # uname 00:29:39.703 10:43:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:39.703 10:43:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 140951 00:29:39.703 10:43:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:39.703 10:43:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:39.703 killing process with pid 140951 00:29:39.703 10:43:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 140951' 00:29:39.703 10:43:33 -- common/autotest_common.sh@945 -- # kill 140951 00:29:39.703 10:43:33 -- common/autotest_common.sh@950 -- # wait 140951 00:29:40.636 ************************************ 00:29:40.636 END TEST bdev_nbd 00:29:40.636 ************************************ 00:29:40.636 10:43:34 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:29:40.636 00:29:40.636 real 0m5.749s 00:29:40.636 user 0m8.344s 00:29:40.636 sys 0m1.012s 00:29:40.636 10:43:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:40.636 10:43:34 -- common/autotest_common.sh@10 -- # set +x 00:29:40.636 10:43:34 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:29:40.636 10:43:34 -- bdev/blockdev.sh@762 -- # '[' nvme = nvme ']' 00:29:40.636 skipping fio tests on NVMe due to multi-ns failures. 00:29:40.636 10:43:34 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:29:40.636 10:43:34 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:40.636 10:43:34 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:29:40.636 10:43:34 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:29:40.636 10:43:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:40.636 10:43:34 -- common/autotest_common.sh@10 -- # set +x 00:29:40.636 ************************************ 00:29:40.636 START TEST bdev_verify 00:29:40.636 ************************************ 00:29:40.636 10:43:34 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:29:40.894 [2024-07-12 10:43:34.559966] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
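The nbd_with_lvol_verify step that closes the nbd test above stacks a logical volume on a malloc bdev and formats it through nbd; stripped of trace markers, the flow is roughly:

rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock'
$rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB backing bdev, 512 B blocks
$rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs
$rpc bdev_lvol_create lvol 4 -l lvs                    # 4 MiB logical volume
$rpc nbd_start_disk lvs/lvol /dev/nbd0
mkfs.ext4 /dev/nbd0    # succeeds, though 4 MiB is too small for an ext4 journal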
00:29:40.894 [2024-07-12 10:43:34.560177] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141182 ] 00:29:40.894 [2024-07-12 10:43:34.730769] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:41.153 [2024-07-12 10:43:34.927643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:41.153 [2024-07-12 10:43:34.927659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:41.719 Running I/O for 5 seconds... 00:29:46.987 00:29:46.987 Latency(us) 00:29:46.987 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:46.987 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:46.987 Verification LBA range: start 0x0 length 0xa0000 00:29:46.987 Nvme0n1 : 5.01 13539.77 52.89 0.00 0.00 9416.18 1236.25 19303.33 00:29:46.987 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:29:46.987 Verification LBA range: start 0xa0000 length 0xa0000 00:29:46.987 Nvme0n1 : 5.01 13555.88 52.95 0.00 0.00 9405.18 318.37 17515.99 00:29:46.987 =================================================================================================================== 00:29:46.987 Total : 27095.66 105.84 0.00 0.00 9410.68 318.37 19303.33 00:29:52.252 00:29:52.252 real 0m11.095s 00:29:52.252 user 0m20.910s 00:29:52.252 sys 0m0.373s 00:29:52.252 10:43:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:52.252 10:43:45 -- common/autotest_common.sh@10 -- # set +x 00:29:52.252 ************************************ 00:29:52.252 END TEST bdev_verify 00:29:52.252 ************************************ 00:29:52.252 10:43:45 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:29:52.253 10:43:45 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:29:52.253 10:43:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:52.253 10:43:45 -- common/autotest_common.sh@10 -- # set +x 00:29:52.253 ************************************ 00:29:52.253 START TEST bdev_verify_big_io 00:29:52.253 ************************************ 00:29:52.253 10:43:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:29:52.253 [2024-07-12 10:43:45.688642] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:29:52.253 [2024-07-12 10:43:45.688792] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141353 ] 00:29:52.253 [2024-07-12 10:43:45.842918] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:52.253 [2024-07-12 10:43:46.021308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:52.253 [2024-07-12 10:43:46.021336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:52.818 Running I/O for 5 seconds... 
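bdev_verify and bdev_verify_big_io are the same bdevperf invocation with a different I/O size; flags copied verbatim from the runs above (bdev.json path assumed unchanged):

bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
$bdevperf --json $conf -q 128 -o 4096  -w verify -t 5 -C -m 0x3 ''   # 4 KiB I/Os, depth 128, 5 s, cores 0-1
$bdevperf --json $conf -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''   # 64 KiB big-io variant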
00:29:58.085 00:29:58.085 Latency(us) 00:29:58.085 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:58.085 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:29:58.085 Verification LBA range: start 0x0 length 0xa000 00:29:58.085 Nvme0n1 : 5.03 2774.12 173.38 0.00 0.00 45622.50 599.51 59578.18 00:29:58.085 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:29:58.085 Verification LBA range: start 0xa000 length 0xa000 00:29:58.085 Nvme0n1 : 5.03 2140.07 133.75 0.00 0.00 59019.75 532.48 93895.21 00:29:58.085 =================================================================================================================== 00:29:58.085 Total : 4914.20 307.14 0.00 0.00 51459.41 532.48 93895.21 00:29:59.037 00:29:59.037 real 0m7.205s 00:29:59.037 user 0m13.275s 00:29:59.037 sys 0m0.281s 00:29:59.037 10:43:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:59.037 10:43:52 -- common/autotest_common.sh@10 -- # set +x 00:29:59.037 ************************************ 00:29:59.037 END TEST bdev_verify_big_io 00:29:59.037 ************************************ 00:29:59.037 10:43:52 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:59.037 10:43:52 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:29:59.037 10:43:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:59.037 10:43:52 -- common/autotest_common.sh@10 -- # set +x 00:29:59.037 ************************************ 00:29:59.037 START TEST bdev_write_zeroes 00:29:59.037 ************************************ 00:29:59.037 10:43:52 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:59.296 [2024-07-12 10:43:52.961473] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:29:59.296 [2024-07-12 10:43:52.961674] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141473 ] 00:29:59.296 [2024-07-12 10:43:53.129606] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:59.555 [2024-07-12 10:43:53.324918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:00.119 Running I/O for 1 seconds... 
00:30:01.050 00:30:01.050 Latency(us) 00:30:01.050 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:01.050 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:30:01.050 Nvme0n1 : 1.00 71982.47 281.18 0.00 0.00 1773.72 577.16 11558.17 00:30:01.050 =================================================================================================================== 00:30:01.050 Total : 71982.47 281.18 0.00 0.00 1773.72 577.16 11558.17 00:30:01.985 00:30:01.985 real 0m2.848s 00:30:01.985 user 0m2.476s 00:30:01.985 sys 0m0.272s 00:30:01.985 10:43:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:01.985 10:43:55 -- common/autotest_common.sh@10 -- # set +x 00:30:01.985 ************************************ 00:30:01.985 END TEST bdev_write_zeroes 00:30:01.985 ************************************ 00:30:01.985 10:43:55 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:01.985 10:43:55 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:30:01.985 10:43:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:01.985 10:43:55 -- common/autotest_common.sh@10 -- # set +x 00:30:01.985 ************************************ 00:30:01.985 START TEST bdev_json_nonenclosed 00:30:01.985 ************************************ 00:30:01.985 10:43:55 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:01.985 [2024-07-12 10:43:55.859667] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:30:01.986 [2024-07-12 10:43:55.859892] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141536 ] 00:30:02.244 [2024-07-12 10:43:56.025560] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:02.504 [2024-07-12 10:43:56.210814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:02.504 [2024-07-12 10:43:56.211012] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:30:02.504 [2024-07-12 10:43:56.211059] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:02.762 00:30:02.762 real 0m0.752s 00:30:02.762 user 0m0.486s 00:30:02.762 sys 0m0.164s 00:30:02.762 10:43:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:02.763 ************************************ 00:30:02.763 END TEST bdev_json_nonenclosed 00:30:02.763 ************************************ 00:30:02.763 10:43:56 -- common/autotest_common.sh@10 -- # set +x 00:30:02.763 10:43:56 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:02.763 10:43:56 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:30:02.763 10:43:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:02.763 10:43:56 -- common/autotest_common.sh@10 -- # set +x 00:30:02.763 ************************************ 00:30:02.763 START TEST bdev_json_nonarray 00:30:02.763 ************************************ 00:30:02.763 10:43:56 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:02.763 [2024-07-12 10:43:56.652288] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:30:02.763 [2024-07-12 10:43:56.652438] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141568 ] 00:30:03.021 [2024-07-12 10:43:56.799295] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:03.280 [2024-07-12 10:43:56.987157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:03.280 [2024-07-12 10:43:56.987368] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
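The two negative json_config tests above only exercise error paths; judging from the error strings, the fixtures are presumably shaped like this (illustrative guesses, not the literal file contents):

# nonenclosed.json - valid members, but the top level is not wrapped in {}
"subsystems": []
# nonarray.json - 'subsystems' present but not an array
{ "subsystems": { "subsystem": "bdev" } }
# for contrast, the enclosing form the app accepts:
{ "subsystems": [ { "subsystem": "bdev", "config": [] } ] }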
00:30:03.280 [2024-07-12 10:43:56.987415] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:03.539 00:30:03.539 real 0m0.724s 00:30:03.539 user 0m0.520s 00:30:03.539 sys 0m0.104s 00:30:03.539 10:43:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:03.539 10:43:57 -- common/autotest_common.sh@10 -- # set +x 00:30:03.539 ************************************ 00:30:03.539 END TEST bdev_json_nonarray 00:30:03.539 ************************************ 00:30:03.539 10:43:57 -- bdev/blockdev.sh@785 -- # [[ nvme == bdev ]] 00:30:03.539 10:43:57 -- bdev/blockdev.sh@792 -- # [[ nvme == gpt ]] 00:30:03.539 10:43:57 -- bdev/blockdev.sh@796 -- # [[ nvme == crypto_sw ]] 00:30:03.539 10:43:57 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:30:03.539 10:43:57 -- bdev/blockdev.sh@809 -- # cleanup 00:30:03.539 10:43:57 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:30:03.539 10:43:57 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:30:03.539 10:43:57 -- bdev/blockdev.sh@24 -- # [[ nvme == rbd ]] 00:30:03.539 10:43:57 -- bdev/blockdev.sh@28 -- # [[ nvme == daos ]] 00:30:03.539 10:43:57 -- bdev/blockdev.sh@32 -- # [[ nvme = \g\p\t ]] 00:30:03.539 10:43:57 -- bdev/blockdev.sh@38 -- # [[ nvme == xnvme ]] 00:30:03.539 ************************************ 00:30:03.539 END TEST blockdev_nvme 00:30:03.539 ************************************ 00:30:03.539 00:30:03.539 real 0m36.456s 00:30:03.539 user 0m56.918s 00:30:03.539 sys 0m3.421s 00:30:03.539 10:43:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:03.539 10:43:57 -- common/autotest_common.sh@10 -- # set +x 00:30:03.539 10:43:57 -- spdk/autotest.sh@219 -- # uname -s 00:30:03.539 10:43:57 -- spdk/autotest.sh@219 -- # [[ Linux == Linux ]] 00:30:03.539 10:43:57 -- spdk/autotest.sh@220 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:30:03.539 10:43:57 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:30:03.539 10:43:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:03.539 10:43:57 -- common/autotest_common.sh@10 -- # set +x 00:30:03.539 ************************************ 00:30:03.539 START TEST blockdev_nvme_gpt 00:30:03.539 ************************************ 00:30:03.539 10:43:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:30:03.797 * Looking for test storage... 
00:30:03.797 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:30:03.797 10:43:57 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:30:03.797 10:43:57 -- bdev/nbd_common.sh@6 -- # set -e 00:30:03.797 10:43:57 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:30:03.797 10:43:57 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:30:03.797 10:43:57 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:30:03.797 10:43:57 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:30:03.797 10:43:57 -- bdev/blockdev.sh@18 -- # : 00:30:03.797 10:43:57 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:30:03.797 10:43:57 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:30:03.797 10:43:57 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:30:03.797 10:43:57 -- bdev/blockdev.sh@672 -- # uname -s 00:30:03.797 10:43:57 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:30:03.797 10:43:57 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:30:03.797 10:43:57 -- bdev/blockdev.sh@680 -- # test_type=gpt 00:30:03.797 10:43:57 -- bdev/blockdev.sh@681 -- # crypto_device= 00:30:03.797 10:43:57 -- bdev/blockdev.sh@682 -- # dek= 00:30:03.797 10:43:57 -- bdev/blockdev.sh@683 -- # env_ctx= 00:30:03.797 10:43:57 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:30:03.797 10:43:57 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:30:03.798 10:43:57 -- bdev/blockdev.sh@688 -- # [[ gpt == bdev ]] 00:30:03.798 10:43:57 -- bdev/blockdev.sh@688 -- # [[ gpt == crypto_* ]] 00:30:03.798 10:43:57 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:30:03.798 10:43:57 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=141651 00:30:03.798 10:43:57 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:30:03.798 10:43:57 -- bdev/blockdev.sh@47 -- # waitforlisten 141651 00:30:03.798 10:43:57 -- common/autotest_common.sh@819 -- # '[' -z 141651 ']' 00:30:03.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:03.798 10:43:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:03.798 10:43:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:03.798 10:43:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:03.798 10:43:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:03.798 10:43:57 -- common/autotest_common.sh@10 -- # set +x 00:30:03.798 10:43:57 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:30:03.798 [2024-07-12 10:43:57.576813] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
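For the gpt suite the harness boots a bare spdk_tgt and blocks on its RPC socket before doing anything else; schematically (waitforlisten is the helper from autotest_common.sh, pid plumbing simplified):

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' &
spdk_tgt_pid=$!
waitforlisten "$spdk_tgt_pid"   # polls /var/tmp/spdk.sock until the target answers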
00:30:03.798 [2024-07-12 10:43:57.577284] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141651 ] 00:30:04.057 [2024-07-12 10:43:57.741504] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:04.057 [2024-07-12 10:43:57.919636] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:04.057 [2024-07-12 10:43:57.919859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:05.450 10:43:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:05.450 10:43:59 -- common/autotest_common.sh@852 -- # return 0 00:30:05.450 10:43:59 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:30:05.450 10:43:59 -- bdev/blockdev.sh@700 -- # setup_gpt_conf 00:30:05.450 10:43:59 -- bdev/blockdev.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:05.450 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:30:05.450 Waiting for block devices as requested 00:30:05.742 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:30:05.742 10:43:59 -- bdev/blockdev.sh@103 -- # get_zoned_devs 00:30:05.742 10:43:59 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:30:05.742 10:43:59 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:30:05.742 10:43:59 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:30:05.742 10:43:59 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:30:05.742 10:43:59 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:30:05.742 10:43:59 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:30:05.742 10:43:59 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:30:05.742 10:43:59 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:30:05.742 10:43:59 -- bdev/blockdev.sh@105 -- # nvme_devs=(/sys/bus/pci/drivers/nvme/*/nvme/nvme*/nvme*n*) 00:30:05.742 10:43:59 -- bdev/blockdev.sh@105 -- # local nvme_devs nvme_dev 00:30:05.742 10:43:59 -- bdev/blockdev.sh@106 -- # gpt_nvme= 00:30:05.742 10:43:59 -- bdev/blockdev.sh@108 -- # for nvme_dev in "${nvme_devs[@]}" 00:30:05.742 10:43:59 -- bdev/blockdev.sh@109 -- # [[ -z '' ]] 00:30:05.742 10:43:59 -- bdev/blockdev.sh@110 -- # dev=/dev/nvme0n1 00:30:05.742 10:43:59 -- bdev/blockdev.sh@111 -- # parted /dev/nvme0n1 -ms print 00:30:05.742 10:43:59 -- bdev/blockdev.sh@111 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:30:05.742 BYT; 00:30:05.742 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:30:05.742 10:43:59 -- bdev/blockdev.sh@112 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:30:05.742 BYT; 00:30:05.742 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:30:05.742 10:43:59 -- bdev/blockdev.sh@113 -- # gpt_nvme=/dev/nvme0n1 00:30:05.742 10:43:59 -- bdev/blockdev.sh@114 -- # break 00:30:05.742 10:43:59 -- bdev/blockdev.sh@117 -- # [[ -n /dev/nvme0n1 ]] 00:30:05.742 10:43:59 -- bdev/blockdev.sh@122 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:30:05.742 10:43:59 -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:30:05.742 10:43:59 -- bdev/blockdev.sh@126 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% 
mkpart SPDK_TEST_second 50% 100% 00:30:06.721 10:44:00 -- bdev/blockdev.sh@128 -- # get_spdk_gpt_old 00:30:06.721 10:44:00 -- scripts/common.sh@410 -- # local spdk_guid 00:30:06.721 10:44:00 -- scripts/common.sh@412 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:30:06.721 10:44:00 -- scripts/common.sh@414 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:30:06.721 10:44:00 -- scripts/common.sh@415 -- # IFS='()' 00:30:06.721 10:44:00 -- scripts/common.sh@415 -- # read -r _ spdk_guid _ 00:30:06.721 10:44:00 -- scripts/common.sh@415 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:30:06.721 10:44:00 -- scripts/common.sh@416 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:30:06.721 10:44:00 -- scripts/common.sh@416 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:30:06.721 10:44:00 -- scripts/common.sh@418 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:30:06.721 10:44:00 -- bdev/blockdev.sh@128 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:30:06.721 10:44:00 -- bdev/blockdev.sh@129 -- # get_spdk_gpt 00:30:06.721 10:44:00 -- scripts/common.sh@422 -- # local spdk_guid 00:30:06.721 10:44:00 -- scripts/common.sh@424 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:30:06.721 10:44:00 -- scripts/common.sh@426 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:30:06.721 10:44:00 -- scripts/common.sh@427 -- # IFS='()' 00:30:06.721 10:44:00 -- scripts/common.sh@427 -- # read -r _ spdk_guid _ 00:30:06.721 10:44:00 -- scripts/common.sh@427 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:30:06.721 10:44:00 -- scripts/common.sh@428 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:30:06.721 10:44:00 -- scripts/common.sh@428 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:30:06.721 10:44:00 -- scripts/common.sh@430 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:30:06.721 10:44:00 -- bdev/blockdev.sh@129 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:30:06.721 10:44:00 -- bdev/blockdev.sh@130 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:30:07.655 The operation has completed successfully. 00:30:07.655 10:44:01 -- bdev/blockdev.sh@131 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:30:08.588 The operation has completed successfully. 
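Condensed, the partitioning dance above is: probe for a blank disk, lay down a two-partition GPT with parted, then restamp SPDK's partition type GUIDs with sgdisk (all values copied from the run above):

dev=/dev/nvme0n1
parted "$dev" -ms print   # 'unrecognised disk label' marks the disk as safe to use
parted -s "$dev" mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
# partition 1 gets the current SPDK GPT type GUID, partition 2 the legacy one
sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 "$dev"
sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df "$dev"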
00:30:08.588 10:44:02 -- bdev/blockdev.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:30:08.846 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:30:09.104 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:30:10.038 10:44:03 -- bdev/blockdev.sh@133 -- # rpc_cmd bdev_get_bdevs 00:30:10.038 10:44:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:10.038 10:44:03 -- common/autotest_common.sh@10 -- # set +x 00:30:10.038 [] 00:30:10.038 10:44:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:10.038 10:44:03 -- bdev/blockdev.sh@134 -- # setup_nvme_conf 00:30:10.038 10:44:03 -- bdev/blockdev.sh@79 -- # local json 00:30:10.038 10:44:03 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:30:10.038 10:44:03 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:30:10.038 10:44:03 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } } ] }'\''' 00:30:10.038 10:44:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:10.038 10:44:03 -- common/autotest_common.sh@10 -- # set +x 00:30:10.038 10:44:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:10.038 10:44:03 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:30:10.038 10:44:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:10.038 10:44:03 -- common/autotest_common.sh@10 -- # set +x 00:30:10.038 10:44:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:10.038 10:44:03 -- bdev/blockdev.sh@738 -- # cat 00:30:10.038 10:44:03 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:30:10.038 10:44:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:10.038 10:44:03 -- common/autotest_common.sh@10 -- # set +x 00:30:10.038 10:44:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:10.038 10:44:03 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:30:10.038 10:44:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:10.038 10:44:03 -- common/autotest_common.sh@10 -- # set +x 00:30:10.038 10:44:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:10.038 10:44:03 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:30:10.038 10:44:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:10.038 10:44:03 -- common/autotest_common.sh@10 -- # set +x 00:30:10.038 10:44:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:10.038 10:44:03 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:30:10.038 10:44:03 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:30:10.038 10:44:03 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:30:10.038 10:44:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:10.038 10:44:03 -- common/autotest_common.sh@10 -- # set +x 00:30:10.038 10:44:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:10.038 10:44:03 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:30:10.038 10:44:03 -- bdev/blockdev.sh@747 -- # jq -r .name 00:30:10.038 10:44:03 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' 
"r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' 00:30:10.295 10:44:03 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:30:10.295 10:44:03 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1p1 00:30:10.295 10:44:03 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:30:10.296 10:44:03 -- bdev/blockdev.sh@752 -- # killprocess 141651 00:30:10.296 10:44:03 -- common/autotest_common.sh@926 -- # '[' -z 141651 ']' 00:30:10.296 10:44:03 -- common/autotest_common.sh@930 -- # kill -0 141651 00:30:10.296 10:44:03 -- common/autotest_common.sh@931 -- # uname 00:30:10.296 10:44:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:10.296 10:44:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 141651 00:30:10.296 10:44:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:10.296 killing process with pid 141651 00:30:10.296 10:44:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:10.296 10:44:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 141651' 00:30:10.296 10:44:03 -- common/autotest_common.sh@945 -- # kill 141651 00:30:10.296 10:44:03 -- common/autotest_common.sh@950 -- # wait 141651 00:30:12.198 10:44:05 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:12.198 10:44:05 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:30:12.198 10:44:05 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:30:12.198 10:44:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:12.198 10:44:05 -- common/autotest_common.sh@10 -- # set +x 00:30:12.198 ************************************ 00:30:12.198 START TEST bdev_hello_world 00:30:12.198 ************************************ 00:30:12.198 10:44:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b 
Nvme0n1p1 '' 00:30:12.198 [2024-07-12 10:44:05.970775] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:30:12.198 [2024-07-12 10:44:05.970936] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142206 ] 00:30:12.458 [2024-07-12 10:44:06.125037] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:12.458 [2024-07-12 10:44:06.305138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:13.026 [2024-07-12 10:44:06.715793] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:30:13.026 [2024-07-12 10:44:06.715876] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:30:13.026 [2024-07-12 10:44:06.715906] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:30:13.026 [2024-07-12 10:44:06.718480] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:30:13.026 [2024-07-12 10:44:06.719004] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:30:13.026 [2024-07-12 10:44:06.719051] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:30:13.026 [2024-07-12 10:44:06.719305] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:30:13.026 00:30:13.027 [2024-07-12 10:44:06.719367] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:30:13.964 ************************************ 00:30:13.964 END TEST bdev_hello_world 00:30:13.964 ************************************ 00:30:13.964 00:30:13.964 real 0m1.806s 00:30:13.964 user 0m1.443s 00:30:13.964 sys 0m0.263s 00:30:13.964 10:44:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:13.964 10:44:07 -- common/autotest_common.sh@10 -- # set +x 00:30:13.964 10:44:07 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:30:13.964 10:44:07 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:30:13.964 10:44:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:13.964 10:44:07 -- common/autotest_common.sh@10 -- # set +x 00:30:13.964 ************************************ 00:30:13.964 START TEST bdev_bounds 00:30:13.964 ************************************ 00:30:13.964 Process bdevio pid: 142256 00:30:13.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:13.964 10:44:07 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:30:13.964 10:44:07 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:30:13.964 10:44:07 -- bdev/blockdev.sh@288 -- # bdevio_pid=142256 00:30:13.964 10:44:07 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:30:13.964 10:44:07 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 142256' 00:30:13.964 10:44:07 -- bdev/blockdev.sh@291 -- # waitforlisten 142256 00:30:13.964 10:44:07 -- common/autotest_common.sh@819 -- # '[' -z 142256 ']' 00:30:13.964 10:44:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:13.964 10:44:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:13.964 10:44:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
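For reference, the bdev_hello_world run above reduces to one invocation of the bundled hello_bdev example against the first GPT partition. A minimal sketch of reproducing it by hand, assuming the same SPDK checkout layout this job uses (the command line is copied from the log; the trailing empty argument the harness passes is omitted):

# Sketch only -- re-runs the hello_bdev example as the test above did.
cd /home/vagrant/spdk_repo/spdk
./build/examples/hello_bdev --json test/bdev/bdev.json -b Nvme0n1p1
# On success it prints the NOTICE sequence seen above: open Nvme0n1p1,
# write, then read back "Hello World!".
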
00:30:13.964 10:44:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:13.964 10:44:07 -- common/autotest_common.sh@10 -- # set +x 00:30:13.964 [2024-07-12 10:44:07.832532] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:30:13.964 [2024-07-12 10:44:07.832981] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142256 ] 00:30:14.223 [2024-07-12 10:44:07.995672] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:14.482 [2024-07-12 10:44:08.180051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:14.482 [2024-07-12 10:44:08.180177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:14.482 [2024-07-12 10:44:08.180186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:15.049 10:44:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:15.049 10:44:08 -- common/autotest_common.sh@852 -- # return 0 00:30:15.049 10:44:08 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:30:15.049 I/O targets: 00:30:15.049 Nvme0n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:30:15.049 Nvme0n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:30:15.049 00:30:15.049 00:30:15.049 CUnit - A unit testing framework for C - Version 2.1-3 00:30:15.049 http://cunit.sourceforge.net/ 00:30:15.049 00:30:15.049 00:30:15.049 Suite: bdevio tests on: Nvme0n1p2 00:30:15.049 Test: blockdev write read block ...passed 00:30:15.049 Test: blockdev write zeroes read block ...passed 00:30:15.049 Test: blockdev write zeroes read no split ...passed 00:30:15.049 Test: blockdev write zeroes read split ...passed 00:30:15.049 Test: blockdev write zeroes read split partial ...passed 00:30:15.049 Test: blockdev reset ...[2024-07-12 10:44:08.906042] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:30:15.049 [2024-07-12 10:44:08.909693] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:30:15.049 passed 00:30:15.049 Test: blockdev write read 8 blocks ...passed 00:30:15.049 Test: blockdev write read size > 128k ...passed 00:30:15.049 Test: blockdev write read invalid size ...passed 00:30:15.049 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:30:15.049 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:30:15.049 Test: blockdev write read max offset ...passed 00:30:15.049 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:30:15.049 Test: blockdev writev readv 8 blocks ...passed 00:30:15.049 Test: blockdev writev readv 30 x 1block ...passed 00:30:15.049 Test: blockdev writev readv block ...passed 00:30:15.049 Test: blockdev writev readv size > 128k ...passed 00:30:15.049 Test: blockdev writev readv size > 128k in two iovs ...passed 00:30:15.049 Test: blockdev comparev and writev ...[2024-07-12 10:44:08.920172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x11f60b000 len:0x1000 00:30:15.049 [2024-07-12 10:44:08.920406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:30:15.049 passed 00:30:15.049 Test: blockdev nvme passthru rw ...passed 00:30:15.049 Test: blockdev nvme passthru vendor specific ...passed 00:30:15.049 Test: blockdev nvme admin passthru ...passed 00:30:15.049 Test: blockdev copy ...passed 00:30:15.049 Suite: bdevio tests on: Nvme0n1p1 00:30:15.049 Test: blockdev write read block ...passed 00:30:15.049 Test: blockdev write zeroes read block ...passed 00:30:15.049 Test: blockdev write zeroes read no split ...passed 00:30:15.049 Test: blockdev write zeroes read split ...passed 00:30:15.309 Test: blockdev write zeroes read split partial ...passed 00:30:15.309 Test: blockdev reset ...[2024-07-12 10:44:08.966738] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:30:15.309 [2024-07-12 10:44:08.970085] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:30:15.309 passed 00:30:15.309 Test: blockdev write read 8 blocks ...passed 00:30:15.309 Test: blockdev write read size > 128k ...passed 00:30:15.309 Test: blockdev write read invalid size ...passed 00:30:15.309 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:30:15.309 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:30:15.309 Test: blockdev write read max offset ...passed 00:30:15.309 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:30:15.309 Test: blockdev writev readv 8 blocks ...passed 00:30:15.309 Test: blockdev writev readv 30 x 1block ...passed 00:30:15.309 Test: blockdev writev readv block ...passed 00:30:15.309 Test: blockdev writev readv size > 128k ...passed 00:30:15.309 Test: blockdev writev readv size > 128k in two iovs ...passed 00:30:15.309 Test: blockdev comparev and writev ...[2024-07-12 10:44:08.981257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x11f60d000 len:0x1000 00:30:15.309 [2024-07-12 10:44:08.981469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:30:15.309 passed 00:30:15.309 Test: blockdev nvme passthru rw ...passed 00:30:15.309 Test: blockdev nvme passthru vendor specific ...passed 00:30:15.309 Test: blockdev nvme admin passthru ...passed 00:30:15.309 Test: blockdev copy ...passed 00:30:15.309 00:30:15.309 Run Summary: Type Total Ran Passed Failed Inactive 00:30:15.309 suites 2 2 n/a 0 0 00:30:15.309 tests 46 46 46 0 0 00:30:15.309 asserts 284 284 284 0 n/a 00:30:15.309 00:30:15.309 Elapsed time = 0.336 seconds 00:30:15.309 0 00:30:15.309 10:44:08 -- bdev/blockdev.sh@293 -- # killprocess 142256 00:30:15.309 10:44:08 -- common/autotest_common.sh@926 -- # '[' -z 142256 ']' 00:30:15.309 10:44:08 -- common/autotest_common.sh@930 -- # kill -0 142256 00:30:15.309 10:44:08 -- common/autotest_common.sh@931 -- # uname 00:30:15.309 10:44:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:15.309 10:44:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 142256 00:30:15.309 killing process with pid 142256 00:30:15.309 10:44:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:15.309 10:44:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:15.309 10:44:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 142256' 00:30:15.309 10:44:09 -- common/autotest_common.sh@945 -- # kill 142256 00:30:15.309 10:44:09 -- common/autotest_common.sh@950 -- # wait 142256 00:30:16.246 ************************************ 00:30:16.246 END TEST bdev_bounds 00:30:16.246 ************************************ 00:30:16.246 10:44:09 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:30:16.246 00:30:16.246 real 0m2.149s 00:30:16.246 user 0m4.974s 00:30:16.246 sys 0m0.390s 00:30:16.246 10:44:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:16.246 10:44:09 -- common/autotest_common.sh@10 -- # set +x 00:30:16.246 10:44:09 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:30:16.246 10:44:09 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:30:16.246 10:44:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:16.246 10:44:09 -- common/autotest_common.sh@10 -- # set +x 00:30:16.246 ************************************ 00:30:16.246 START TEST bdev_nbd 
00:30:16.246 ************************************ 00:30:16.246 10:44:09 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:30:16.246 10:44:09 -- bdev/blockdev.sh@298 -- # uname -s 00:30:16.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:30:16.246 10:44:09 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:30:16.246 10:44:09 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:16.246 10:44:09 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:30:16.246 10:44:09 -- bdev/blockdev.sh@302 -- # bdev_all=($2) 00:30:16.246 10:44:09 -- bdev/blockdev.sh@302 -- # local bdev_all 00:30:16.246 10:44:09 -- bdev/blockdev.sh@303 -- # local bdev_num=2 00:30:16.246 10:44:09 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:30:16.246 10:44:09 -- bdev/blockdev.sh@309 -- # nbd_all=(/dev/nbd+([0-9])) 00:30:16.246 10:44:09 -- bdev/blockdev.sh@309 -- # local nbd_all 00:30:16.246 10:44:09 -- bdev/blockdev.sh@310 -- # bdev_num=2 00:30:16.246 10:44:09 -- bdev/blockdev.sh@312 -- # nbd_list=(${nbd_all[@]::bdev_num}) 00:30:16.246 10:44:09 -- bdev/blockdev.sh@312 -- # local nbd_list 00:30:16.246 10:44:09 -- bdev/blockdev.sh@313 -- # bdev_list=(${bdev_all[@]::bdev_num}) 00:30:16.246 10:44:09 -- bdev/blockdev.sh@313 -- # local bdev_list 00:30:16.246 10:44:09 -- bdev/blockdev.sh@316 -- # nbd_pid=142319 00:30:16.246 10:44:09 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:30:16.246 10:44:09 -- bdev/blockdev.sh@318 -- # waitforlisten 142319 /var/tmp/spdk-nbd.sock 00:30:16.246 10:44:09 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:30:16.246 10:44:09 -- common/autotest_common.sh@819 -- # '[' -z 142319 ']' 00:30:16.246 10:44:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:30:16.246 10:44:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:16.246 10:44:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:30:16.246 10:44:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:16.246 10:44:09 -- common/autotest_common.sh@10 -- # set +x 00:30:16.247 [2024-07-12 10:44:10.041989] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:30:16.247 [2024-07-12 10:44:10.042337] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:16.506 [2024-07-12 10:44:10.194042] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:16.506 [2024-07-12 10:44:10.376465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:17.074 10:44:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:17.074 10:44:10 -- common/autotest_common.sh@852 -- # return 0 00:30:17.074 10:44:10 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:30:17.074 10:44:10 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:17.074 10:44:10 -- bdev/nbd_common.sh@114 -- # bdev_list=($2) 00:30:17.074 10:44:10 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:30:17.074 10:44:10 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:30:17.074 10:44:10 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:17.074 10:44:10 -- bdev/nbd_common.sh@23 -- # bdev_list=($2) 00:30:17.074 10:44:10 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:30:17.074 10:44:10 -- bdev/nbd_common.sh@24 -- # local i 00:30:17.074 10:44:10 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:30:17.074 10:44:10 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:30:17.074 10:44:10 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:30:17.074 10:44:10 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:30:17.333 10:44:11 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:30:17.333 10:44:11 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:30:17.333 10:44:11 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:30:17.333 10:44:11 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:30:17.333 10:44:11 -- common/autotest_common.sh@857 -- # local i 00:30:17.333 10:44:11 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:30:17.333 10:44:11 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:30:17.333 10:44:11 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:30:17.333 10:44:11 -- common/autotest_common.sh@861 -- # break 00:30:17.333 10:44:11 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:30:17.333 10:44:11 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:30:17.333 10:44:11 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:17.333 1+0 records in 00:30:17.333 1+0 records out 00:30:17.333 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000846462 s, 4.8 MB/s 00:30:17.333 10:44:11 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:17.333 10:44:11 -- common/autotest_common.sh@874 -- # size=4096 00:30:17.333 10:44:11 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:17.333 10:44:11 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:30:17.333 10:44:11 -- common/autotest_common.sh@877 -- # return 0 00:30:17.333 10:44:11 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:30:17.333 10:44:11 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:30:17.333 10:44:11 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme0n1p2 00:30:17.592 10:44:11 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:30:17.592 10:44:11 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:30:17.592 10:44:11 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:30:17.592 10:44:11 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:30:17.592 10:44:11 -- common/autotest_common.sh@857 -- # local i 00:30:17.592 10:44:11 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:30:17.592 10:44:11 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:30:17.592 10:44:11 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:30:17.592 10:44:11 -- common/autotest_common.sh@861 -- # break 00:30:17.592 10:44:11 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:30:17.592 10:44:11 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:30:17.592 10:44:11 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:17.592 1+0 records in 00:30:17.592 1+0 records out 00:30:17.592 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000742137 s, 5.5 MB/s 00:30:17.592 10:44:11 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:17.592 10:44:11 -- common/autotest_common.sh@874 -- # size=4096 00:30:17.592 10:44:11 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:17.592 10:44:11 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:30:17.592 10:44:11 -- common/autotest_common.sh@877 -- # return 0 00:30:17.592 10:44:11 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:30:17.592 10:44:11 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:30:17.592 10:44:11 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:30:17.851 10:44:11 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:30:17.851 { 00:30:17.851 "nbd_device": "/dev/nbd0", 00:30:17.851 "bdev_name": "Nvme0n1p1" 00:30:17.851 }, 00:30:17.851 { 00:30:17.851 "nbd_device": "/dev/nbd1", 00:30:17.851 "bdev_name": "Nvme0n1p2" 00:30:17.851 } 00:30:17.851 ]' 00:30:17.851 10:44:11 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:30:17.851 10:44:11 -- bdev/nbd_common.sh@119 -- # echo '[ 00:30:17.851 { 00:30:17.851 "nbd_device": "/dev/nbd0", 00:30:17.851 "bdev_name": "Nvme0n1p1" 00:30:17.851 }, 00:30:17.851 { 00:30:17.851 "nbd_device": "/dev/nbd1", 00:30:17.851 "bdev_name": "Nvme0n1p2" 00:30:17.851 } 00:30:17.851 ]' 00:30:17.851 10:44:11 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:30:17.851 10:44:11 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:30:17.851 10:44:11 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:17.851 10:44:11 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:30:17.851 10:44:11 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:17.851 10:44:11 -- bdev/nbd_common.sh@51 -- # local i 00:30:17.851 10:44:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:17.851 10:44:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:30:18.110 10:44:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:18.110 10:44:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:18.110 10:44:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:18.110 10:44:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:18.110 10:44:11 -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:18.110 10:44:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:18.110 10:44:11 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:30:18.369 10:44:12 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:30:18.369 10:44:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:18.369 10:44:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:18.369 10:44:12 -- bdev/nbd_common.sh@41 -- # break 00:30:18.369 10:44:12 -- bdev/nbd_common.sh@45 -- # return 0 00:30:18.369 10:44:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:18.369 10:44:12 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:30:18.627 10:44:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:18.627 10:44:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:18.627 10:44:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:18.627 10:44:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:18.627 10:44:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:18.627 10:44:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:18.627 10:44:12 -- bdev/nbd_common.sh@41 -- # break 00:30:18.627 10:44:12 -- bdev/nbd_common.sh@45 -- # return 0 00:30:18.627 10:44:12 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:30:18.627 10:44:12 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:18.627 10:44:12 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:30:18.627 10:44:12 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:30:18.627 10:44:12 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:30:18.627 10:44:12 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:30:18.885 10:44:12 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:30:18.885 10:44:12 -- bdev/nbd_common.sh@65 -- # echo '' 00:30:18.885 10:44:12 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:30:18.885 10:44:12 -- bdev/nbd_common.sh@65 -- # true 00:30:18.885 10:44:12 -- bdev/nbd_common.sh@65 -- # count=0 00:30:18.885 10:44:12 -- bdev/nbd_common.sh@66 -- # echo 0 00:30:18.885 10:44:12 -- bdev/nbd_common.sh@122 -- # count=0 00:30:18.885 10:44:12 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:30:18.885 10:44:12 -- bdev/nbd_common.sh@127 -- # return 0 00:30:18.885 10:44:12 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:30:18.885 10:44:12 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:18.885 10:44:12 -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:30:18.885 10:44:12 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:30:18.885 10:44:12 -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:30:18.885 10:44:12 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:30:18.885 10:44:12 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:30:18.885 10:44:12 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:18.885 10:44:12 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:30:18.885 10:44:12 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:18.885 10:44:12 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:30:18.885 10:44:12 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:18.885 10:44:12 -- bdev/nbd_common.sh@12 -- # local i 00:30:18.885 10:44:12 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:18.885 10:44:12 -- bdev/nbd_common.sh@14 -- 
# (( i < 2 )) 00:30:18.885 10:44:12 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:30:19.143 /dev/nbd0 00:30:19.143 10:44:12 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:30:19.143 10:44:12 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:30:19.143 10:44:12 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:30:19.143 10:44:12 -- common/autotest_common.sh@857 -- # local i 00:30:19.143 10:44:12 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:30:19.143 10:44:12 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:30:19.143 10:44:12 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:30:19.143 10:44:12 -- common/autotest_common.sh@861 -- # break 00:30:19.143 10:44:12 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:30:19.143 10:44:12 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:30:19.143 10:44:12 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:19.143 1+0 records in 00:30:19.143 1+0 records out 00:30:19.143 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000705272 s, 5.8 MB/s 00:30:19.143 10:44:12 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:19.143 10:44:12 -- common/autotest_common.sh@874 -- # size=4096 00:30:19.143 10:44:12 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:19.143 10:44:12 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:30:19.143 10:44:12 -- common/autotest_common.sh@877 -- # return 0 00:30:19.143 10:44:12 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:19.143 10:44:12 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:19.143 10:44:12 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:30:19.143 /dev/nbd1 00:30:19.401 10:44:13 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:19.401 10:44:13 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:30:19.401 10:44:13 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:30:19.401 10:44:13 -- common/autotest_common.sh@857 -- # local i 00:30:19.401 10:44:13 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:30:19.401 10:44:13 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:30:19.401 10:44:13 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:30:19.401 10:44:13 -- common/autotest_common.sh@861 -- # break 00:30:19.401 10:44:13 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:30:19.401 10:44:13 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:30:19.401 10:44:13 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:19.401 1+0 records in 00:30:19.401 1+0 records out 00:30:19.401 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000958986 s, 4.3 MB/s 00:30:19.401 10:44:13 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:19.401 10:44:13 -- common/autotest_common.sh@874 -- # size=4096 00:30:19.401 10:44:13 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:19.401 10:44:13 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:30:19.401 10:44:13 -- common/autotest_common.sh@877 -- # return 0 00:30:19.401 10:44:13 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:19.401 10:44:13 -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:19.401 10:44:13 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:30:19.401 10:44:13 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:19.401 10:44:13 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:30:19.660 10:44:13 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:30:19.660 { 00:30:19.660 "nbd_device": "/dev/nbd0", 00:30:19.660 "bdev_name": "Nvme0n1p1" 00:30:19.660 }, 00:30:19.660 { 00:30:19.660 "nbd_device": "/dev/nbd1", 00:30:19.660 "bdev_name": "Nvme0n1p2" 00:30:19.660 } 00:30:19.660 ]' 00:30:19.660 10:44:13 -- bdev/nbd_common.sh@64 -- # echo '[ 00:30:19.660 { 00:30:19.660 "nbd_device": "/dev/nbd0", 00:30:19.660 "bdev_name": "Nvme0n1p1" 00:30:19.660 }, 00:30:19.660 { 00:30:19.660 "nbd_device": "/dev/nbd1", 00:30:19.660 "bdev_name": "Nvme0n1p2" 00:30:19.660 } 00:30:19.660 ]' 00:30:19.660 10:44:13 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:30:19.660 10:44:13 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:30:19.660 /dev/nbd1' 00:30:19.660 10:44:13 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:30:19.660 /dev/nbd1' 00:30:19.660 10:44:13 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:30:19.660 10:44:13 -- bdev/nbd_common.sh@65 -- # count=2 00:30:19.660 10:44:13 -- bdev/nbd_common.sh@66 -- # echo 2 00:30:19.660 10:44:13 -- bdev/nbd_common.sh@95 -- # count=2 00:30:19.660 10:44:13 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:30:19.660 10:44:13 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:30:19.660 10:44:13 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:30:19.660 10:44:13 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:30:19.660 10:44:13 -- bdev/nbd_common.sh@71 -- # local operation=write 00:30:19.660 10:44:13 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:30:19.660 10:44:13 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:30:19.660 10:44:13 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:30:19.660 256+0 records in 00:30:19.660 256+0 records out 00:30:19.660 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00608139 s, 172 MB/s 00:30:19.660 10:44:13 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:30:19.660 10:44:13 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:30:19.660 256+0 records in 00:30:19.660 256+0 records out 00:30:19.660 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0904785 s, 11.6 MB/s 00:30:19.660 10:44:13 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:30:19.660 10:44:13 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:30:19.918 256+0 records in 00:30:19.918 256+0 records out 00:30:19.918 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0867197 s, 12.1 MB/s 00:30:19.918 10:44:13 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:30:19.918 10:44:13 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:30:19.918 10:44:13 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:30:19.918 10:44:13 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:30:19.918 10:44:13 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:30:19.918 10:44:13 -- bdev/nbd_common.sh@74 -- # '[' verify = write 
']' 00:30:19.918 10:44:13 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:30:19.918 10:44:13 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:30:19.918 10:44:13 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:30:19.918 10:44:13 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:30:19.918 10:44:13 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:30:19.918 10:44:13 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:30:19.918 10:44:13 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:30:19.918 10:44:13 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:19.918 10:44:13 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:30:19.918 10:44:13 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:19.918 10:44:13 -- bdev/nbd_common.sh@51 -- # local i 00:30:19.918 10:44:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:19.918 10:44:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:30:20.176 10:44:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:20.176 10:44:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:20.176 10:44:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:20.176 10:44:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:20.176 10:44:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:20.176 10:44:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:20.176 10:44:13 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:30:20.176 10:44:13 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:30:20.176 10:44:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:20.176 10:44:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:20.176 10:44:13 -- bdev/nbd_common.sh@41 -- # break 00:30:20.176 10:44:13 -- bdev/nbd_common.sh@45 -- # return 0 00:30:20.176 10:44:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:20.176 10:44:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:30:20.434 10:44:14 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:20.435 10:44:14 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:20.435 10:44:14 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:20.435 10:44:14 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:20.435 10:44:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:20.435 10:44:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:20.435 10:44:14 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:30:20.435 10:44:14 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:30:20.435 10:44:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:20.435 10:44:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:20.435 10:44:14 -- bdev/nbd_common.sh@41 -- # break 00:30:20.435 10:44:14 -- bdev/nbd_common.sh@45 -- # return 0 00:30:20.435 10:44:14 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:30:20.435 10:44:14 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:20.435 10:44:14 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:30:20.693 10:44:14 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:30:20.693 10:44:14 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:30:20.693 
10:44:14 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:30:20.693 10:44:14 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:30:20.693 10:44:14 -- bdev/nbd_common.sh@65 -- # echo '' 00:30:20.693 10:44:14 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:30:20.693 10:44:14 -- bdev/nbd_common.sh@65 -- # true 00:30:20.693 10:44:14 -- bdev/nbd_common.sh@65 -- # count=0 00:30:20.693 10:44:14 -- bdev/nbd_common.sh@66 -- # echo 0 00:30:20.693 10:44:14 -- bdev/nbd_common.sh@104 -- # count=0 00:30:20.693 10:44:14 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:30:20.693 10:44:14 -- bdev/nbd_common.sh@109 -- # return 0 00:30:20.693 10:44:14 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:30:20.693 10:44:14 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:20.693 10:44:14 -- bdev/nbd_common.sh@132 -- # nbd_list=($2) 00:30:20.693 10:44:14 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:30:20.693 10:44:14 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:30:20.693 10:44:14 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:30:20.951 malloc_lvol_verify 00:30:21.209 10:44:14 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:30:21.209 b673b371-f4ca-4406-b5db-6883761a829c 00:30:21.209 10:44:15 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:30:21.467 fb63494f-efea-4089-8655-3d324cfeb143 00:30:21.467 10:44:15 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:30:21.726 /dev/nbd0 00:30:21.726 10:44:15 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:30:21.726 mke2fs 1.45.5 (07-Jan-2020) 00:30:21.726 00:30:21.726 Filesystem too small for a journal 00:30:21.726 Creating filesystem with 1024 4k blocks and 1024 inodes 00:30:21.726 00:30:21.726 Allocating group tables: 0/1 done 00:30:21.726 Writing inode tables: 0/1 done 00:30:21.726 Writing superblocks and filesystem accounting information: 0/1 done 00:30:21.726 00:30:21.726 10:44:15 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:30:21.726 10:44:15 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:30:21.726 10:44:15 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:21.726 10:44:15 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:30:21.726 10:44:15 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:21.726 10:44:15 -- bdev/nbd_common.sh@51 -- # local i 00:30:21.726 10:44:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:21.726 10:44:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:30:21.985 10:44:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:21.985 10:44:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:21.985 10:44:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:21.985 10:44:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:21.985 10:44:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:21.985 10:44:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:21.985 10:44:15 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:30:21.985 10:44:15 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:30:21.985 10:44:15 -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:21.985 10:44:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:21.985 10:44:15 -- bdev/nbd_common.sh@41 -- # break 00:30:21.985 10:44:15 -- bdev/nbd_common.sh@45 -- # return 0 00:30:21.985 10:44:15 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:30:21.985 10:44:15 -- bdev/nbd_common.sh@147 -- # return 0 00:30:21.985 10:44:15 -- bdev/blockdev.sh@324 -- # killprocess 142319 00:30:21.985 10:44:15 -- common/autotest_common.sh@926 -- # '[' -z 142319 ']' 00:30:21.985 10:44:15 -- common/autotest_common.sh@930 -- # kill -0 142319 00:30:21.985 10:44:15 -- common/autotest_common.sh@931 -- # uname 00:30:21.985 10:44:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:21.985 10:44:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 142319 00:30:21.985 10:44:15 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:21.985 10:44:15 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:21.985 10:44:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 142319' 00:30:21.985 killing process with pid 142319 00:30:21.985 10:44:15 -- common/autotest_common.sh@945 -- # kill 142319 00:30:21.985 10:44:15 -- common/autotest_common.sh@950 -- # wait 142319 00:30:23.367 10:44:16 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:30:23.367 00:30:23.367 real 0m6.938s 00:30:23.367 user 0m9.809s 00:30:23.367 sys 0m1.535s 00:30:23.367 10:44:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:23.367 10:44:16 -- common/autotest_common.sh@10 -- # set +x 00:30:23.367 ************************************ 00:30:23.367 END TEST bdev_nbd 00:30:23.367 ************************************ 00:30:23.367 skipping fio tests on NVMe due to multi-ns failures. 00:30:23.367 10:44:16 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:30:23.367 10:44:16 -- bdev/blockdev.sh@762 -- # '[' gpt = nvme ']' 00:30:23.367 10:44:16 -- bdev/blockdev.sh@762 -- # '[' gpt = gpt ']' 00:30:23.367 10:44:16 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:30:23.367 10:44:16 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:23.367 10:44:16 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:30:23.367 10:44:16 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:30:23.367 10:44:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:23.367 10:44:16 -- common/autotest_common.sh@10 -- # set +x 00:30:23.367 ************************************ 00:30:23.367 START TEST bdev_verify 00:30:23.367 ************************************ 00:30:23.367 10:44:16 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:30:23.367 [2024-07-12 10:44:17.050165] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
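The bdev_verify pass is driven by bdevperf with the flags shown in the invocation above. As a reading aid, an annotated sketch; the flag readings are inferred from matching fields in this log's own output, and -C is simply kept as the log has it:

# Sketch of the bdev_verify invocation above, annotated.
#   -q 128     queue depth per job   (matches "depth: 128" in the results)
#   -o 4096    I/O size in bytes     (matches "IO size: 4096")
#   -w verify  write / read back / compare workload
#   -t 5       run time in seconds   ("Running I/O for 5 seconds...")
#   -m 0x3     core mask: reactors started on cores 0 and 1, as logged
#   -C         kept verbatim from the log; its meaning is not inferable here
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3
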
00:30:23.367 [2024-07-12 10:44:17.050650] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142600 ] 00:30:23.367 [2024-07-12 10:44:17.224965] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:23.626 [2024-07-12 10:44:17.421781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:23.626 [2024-07-12 10:44:17.421798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:24.192 Running I/O for 5 seconds... 00:30:29.463 00:30:29.463 Latency(us) 00:30:29.463 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:29.463 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:29.463 Verification LBA range: start 0x0 length 0x4ff80 00:30:29.463 Nvme0n1p1 : 5.02 5317.22 20.77 0.00 0.00 24010.25 2561.86 23473.80 00:30:29.463 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:30:29.463 Verification LBA range: start 0x4ff80 length 0x4ff80 00:30:29.463 Nvme0n1p1 : 5.02 5265.24 20.57 0.00 0.00 24250.38 1459.67 28955.00 00:30:29.463 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:29.463 Verification LBA range: start 0x0 length 0x4ff7f 00:30:29.463 Nvme0n1p2 : 5.02 5315.36 20.76 0.00 0.00 23996.35 2710.81 22520.55 00:30:29.463 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:30:29.463 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:30:29.463 Nvme0n1p2 : 5.03 5261.08 20.55 0.00 0.00 24212.36 4438.57 26691.03 00:30:29.463 =================================================================================================================== 00:30:29.463 Total : 21158.90 82.65 0.00 0.00 24116.79 1459.67 28955.00 00:30:30.397 ************************************ 00:30:30.397 END TEST bdev_verify 00:30:30.397 ************************************ 00:30:30.397 00:30:30.397 real 0m7.216s 00:30:30.397 user 0m13.189s 00:30:30.397 sys 0m0.313s 00:30:30.397 10:44:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:30.397 10:44:24 -- common/autotest_common.sh@10 -- # set +x 00:30:30.397 10:44:24 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:30:30.397 10:44:24 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:30:30.397 10:44:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:30.397 10:44:24 -- common/autotest_common.sh@10 -- # set +x 00:30:30.397 ************************************ 00:30:30.398 START TEST bdev_verify_big_io 00:30:30.398 ************************************ 00:30:30.398 10:44:24 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:30:30.656 [2024-07-12 10:44:24.310792] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
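A quick consistency check on the bdev_verify summary above: throughput in MiB/s should equal IOPS times the 4096-byte I/O size. With the numbers copied from the "Total" row:

# Sanity check (values copied from the verify summary above).
awk 'BEGIN { printf "%.2f\n", 21158.90 * 4096 / (1024 * 1024) }'
# -> 82.65, matching the 82.65 MiB/s reported alongside 21158.90 IOPS
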
00:30:30.656 [2024-07-12 10:44:24.311161] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142726 ] 00:30:30.656 [2024-07-12 10:44:24.480441] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:30.915 [2024-07-12 10:44:24.668993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:30.915 [2024-07-12 10:44:24.669003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:31.482 Running I/O for 5 seconds... 00:30:36.752 00:30:36.752 Latency(us) 00:30:36.752 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:36.752 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:30:36.752 Verification LBA range: start 0x0 length 0x4ff8 00:30:36.752 Nvme0n1p1 : 5.07 1344.62 84.04 0.00 0.00 94333.40 2651.23 112483.61 00:30:36.752 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:30:36.752 Verification LBA range: start 0x4ff8 length 0x4ff8 00:30:36.752 Nvme0n1p1 : 5.08 1150.71 71.92 0.00 0.00 109927.34 10247.45 169678.66 00:30:36.752 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:30:36.752 Verification LBA range: start 0x0 length 0x4ff7 00:30:36.752 Nvme0n1p2 : 5.08 1335.31 83.46 0.00 0.00 94329.73 897.40 139174.63 00:30:36.752 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:30:36.752 Verification LBA range: start 0x4ff7 length 0x4ff7 00:30:36.752 Nvme0n1p2 : 5.08 1167.54 72.97 0.00 0.00 107376.62 696.32 124875.87 00:30:36.752 =================================================================================================================== 00:30:36.752 Total : 4998.18 312.39 0.00 0.00 100973.79 696.32 169678.66 00:30:38.131 ************************************ 00:30:38.131 END TEST bdev_verify_big_io 00:30:38.131 ************************************ 00:30:38.131 00:30:38.131 real 0m7.474s 00:30:38.131 user 0m13.768s 00:30:38.131 sys 0m0.282s 00:30:38.131 10:44:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:38.131 10:44:31 -- common/autotest_common.sh@10 -- # set +x 00:30:38.131 10:44:31 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:38.131 10:44:31 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:30:38.131 10:44:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:38.131 10:44:31 -- common/autotest_common.sh@10 -- # set +x 00:30:38.131 ************************************ 00:30:38.131 START TEST bdev_write_zeroes 00:30:38.131 ************************************ 00:30:38.131 10:44:31 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:38.131 [2024-07-12 10:44:31.828732] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
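The next two tests, bdev_json_nonenclosed and bdev_json_nonarray, feed bdevperf deliberately malformed JSON configs and expect it to refuse them, as the ERROR lines below show. For contrast, a sketch of a well-formed config, inferred from the load_subsystem_config payload earlier in this run and from the two error messages (the top level must be an object whose "subsystems" member is an array; the file name is illustrative):

# Sketch: shape of a well-formed SPDK JSON config. The bdev entry is the
# same attach-controller call used earlier in this run.
cat > /tmp/enclosed.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:06.0" }
        }
      ]
    }
  ]
}
EOF
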
00:30:38.131 [2024-07-12 10:44:31.829062] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142832 ] 00:30:38.131 [2024-07-12 10:44:31.981459] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:38.390 [2024-07-12 10:44:32.163276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:38.958 Running I/O for 1 seconds... 00:30:39.963 00:30:39.963 Latency(us) 00:30:39.963 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:39.963 Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:30:39.963 Nvme0n1p1 : 1.00 27230.91 106.37 0.00 0.00 4690.82 2189.50 15073.28 00:30:39.963 Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:30:39.963 Nvme0n1p2 : 1.01 27250.16 106.45 0.00 0.00 4681.16 2561.86 13941.29 00:30:39.963 =================================================================================================================== 00:30:39.963 Total : 54481.07 212.82 0.00 0.00 4685.99 2189.50 15073.28 00:30:40.919 ************************************ 00:30:40.920 END TEST bdev_write_zeroes 00:30:40.920 ************************************ 00:30:40.920 00:30:40.920 real 0m2.749s 00:30:40.920 user 0m2.422s 00:30:40.920 sys 0m0.221s 00:30:40.920 10:44:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:40.920 10:44:34 -- common/autotest_common.sh@10 -- # set +x 00:30:40.920 10:44:34 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:40.920 10:44:34 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:30:40.920 10:44:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:40.920 10:44:34 -- common/autotest_common.sh@10 -- # set +x 00:30:40.920 ************************************ 00:30:40.920 START TEST bdev_json_nonenclosed 00:30:40.920 ************************************ 00:30:40.920 10:44:34 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:40.920 [2024-07-12 10:44:34.644075] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:30:40.920 [2024-07-12 10:44:34.644597] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142906 ] 00:30:40.920 [2024-07-12 10:44:34.815687] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:41.178 [2024-07-12 10:44:35.013470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:41.178 [2024-07-12 10:44:35.013893] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:30:41.178 [2024-07-12 10:44:35.014035] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:41.744 ************************************ 00:30:41.744 END TEST bdev_json_nonenclosed 00:30:41.744 ************************************ 00:30:41.744 00:30:41.744 real 0m0.771s 00:30:41.744 user 0m0.521s 00:30:41.744 sys 0m0.149s 00:30:41.744 10:44:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:41.744 10:44:35 -- common/autotest_common.sh@10 -- # set +x 00:30:41.744 10:44:35 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:41.744 10:44:35 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:30:41.744 10:44:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:41.744 10:44:35 -- common/autotest_common.sh@10 -- # set +x 00:30:41.744 ************************************ 00:30:41.744 START TEST bdev_json_nonarray 00:30:41.744 ************************************ 00:30:41.744 10:44:35 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:41.744 [2024-07-12 10:44:35.466651] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:30:41.744 [2024-07-12 10:44:35.467018] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142937 ] 00:30:41.744 [2024-07-12 10:44:35.633237] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:42.003 [2024-07-12 10:44:35.817504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:42.003 [2024-07-12 10:44:35.817977] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:30:42.003 [2024-07-12 10:44:35.818120] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:42.262 ************************************ 00:30:42.262 END TEST bdev_json_nonarray 00:30:42.262 ************************************ 00:30:42.262 00:30:42.262 real 0m0.752s 00:30:42.262 user 0m0.526s 00:30:42.262 sys 0m0.124s 00:30:42.262 10:44:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:42.262 10:44:36 -- common/autotest_common.sh@10 -- # set +x 00:30:42.521 10:44:36 -- bdev/blockdev.sh@785 -- # [[ gpt == bdev ]] 00:30:42.521 10:44:36 -- bdev/blockdev.sh@792 -- # [[ gpt == gpt ]] 00:30:42.521 10:44:36 -- bdev/blockdev.sh@793 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:30:42.521 10:44:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:42.521 10:44:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:42.521 10:44:36 -- common/autotest_common.sh@10 -- # set +x 00:30:42.521 ************************************ 00:30:42.521 START TEST bdev_gpt_uuid 00:30:42.521 ************************************ 00:30:42.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:42.521 10:44:36 -- common/autotest_common.sh@1104 -- # bdev_gpt_uuid 00:30:42.521 10:44:36 -- bdev/blockdev.sh@612 -- # local bdev 00:30:42.521 10:44:36 -- bdev/blockdev.sh@614 -- # start_spdk_tgt 00:30:42.521 10:44:36 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=142975 00:30:42.521 10:44:36 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:30:42.521 10:44:36 -- bdev/blockdev.sh@47 -- # waitforlisten 142975 00:30:42.521 10:44:36 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:30:42.521 10:44:36 -- common/autotest_common.sh@819 -- # '[' -z 142975 ']' 00:30:42.521 10:44:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:42.521 10:44:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:42.521 10:44:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:42.521 10:44:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:42.521 10:44:36 -- common/autotest_common.sh@10 -- # set +x 00:30:42.521 [2024-07-12 10:44:36.282704] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:30:42.521 [2024-07-12 10:44:36.283337] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142975 ] 00:30:42.780 [2024-07-12 10:44:36.451548] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:42.780 [2024-07-12 10:44:36.631121] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:42.780 [2024-07-12 10:44:36.631642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:44.155 10:44:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:44.155 10:44:37 -- common/autotest_common.sh@852 -- # return 0 00:30:44.155 10:44:37 -- bdev/blockdev.sh@616 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:30:44.155 10:44:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:44.155 10:44:37 -- common/autotest_common.sh@10 -- # set +x 00:30:44.155 Some configs were skipped because the RPC state that can call them passed over. 
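The UUID checks that follow go through the test harness's rpc_cmd wrapper. A sketch of the same query made with the raw rpc.py client instead, assuming spdk_tgt is listening on its default /var/tmp/spdk.sock (the UUID and the jq filter are the ones used below):

# Sketch: query the GPT partition bdev by its unique partition GUID and
# print its first alias, mirroring the rpc_cmd / jq checks below.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
    -b 6f89f330-603b-4116-ac73-2ca8eae53030 | jq -r '.[0].aliases[0]'
# -> 6f89f330-603b-4116-ac73-2ca8eae53030
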
00:30:44.155 10:44:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:44.155 10:44:37 -- bdev/blockdev.sh@617 -- # rpc_cmd bdev_wait_for_examine 00:30:44.155 10:44:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:44.155 10:44:37 -- common/autotest_common.sh@10 -- # set +x 00:30:44.155 10:44:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:44.155 10:44:38 -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:30:44.155 10:44:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:44.155 10:44:38 -- common/autotest_common.sh@10 -- # set +x 00:30:44.155 10:44:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:44.155 10:44:38 -- bdev/blockdev.sh@619 -- # bdev='[ 00:30:44.155 { 00:30:44.155 "name": "Nvme0n1p1", 00:30:44.155 "aliases": [ 00:30:44.155 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:30:44.155 ], 00:30:44.155 "product_name": "GPT Disk", 00:30:44.156 "block_size": 4096, 00:30:44.156 "num_blocks": 655104, 00:30:44.156 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:30:44.156 "assigned_rate_limits": { 00:30:44.156 "rw_ios_per_sec": 0, 00:30:44.156 "rw_mbytes_per_sec": 0, 00:30:44.156 "r_mbytes_per_sec": 0, 00:30:44.156 "w_mbytes_per_sec": 0 00:30:44.156 }, 00:30:44.156 "claimed": false, 00:30:44.156 "zoned": false, 00:30:44.156 "supported_io_types": { 00:30:44.156 "read": true, 00:30:44.156 "write": true, 00:30:44.156 "unmap": true, 00:30:44.156 "write_zeroes": true, 00:30:44.156 "flush": true, 00:30:44.156 "reset": true, 00:30:44.156 "compare": true, 00:30:44.156 "compare_and_write": false, 00:30:44.156 "abort": true, 00:30:44.156 "nvme_admin": false, 00:30:44.156 "nvme_io": false 00:30:44.156 }, 00:30:44.156 "driver_specific": { 00:30:44.156 "gpt": { 00:30:44.156 "base_bdev": "Nvme0n1", 00:30:44.156 "offset_blocks": 256, 00:30:44.156 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:30:44.156 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:30:44.156 "partition_name": "SPDK_TEST_first" 00:30:44.156 } 00:30:44.156 } 00:30:44.156 } 00:30:44.156 ]' 00:30:44.156 10:44:38 -- bdev/blockdev.sh@620 -- # jq -r length 00:30:44.414 10:44:38 -- bdev/blockdev.sh@620 -- # [[ 1 == \1 ]] 00:30:44.414 10:44:38 -- bdev/blockdev.sh@621 -- # jq -r '.[0].aliases[0]' 00:30:44.414 10:44:38 -- bdev/blockdev.sh@621 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:30:44.414 10:44:38 -- bdev/blockdev.sh@622 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:30:44.414 10:44:38 -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:30:44.414 10:44:38 -- bdev/blockdev.sh@624 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:30:44.414 10:44:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:44.414 10:44:38 -- common/autotest_common.sh@10 -- # set +x 00:30:44.414 10:44:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:44.414 10:44:38 -- bdev/blockdev.sh@624 -- # bdev='[ 00:30:44.414 { 00:30:44.414 "name": "Nvme0n1p2", 00:30:44.414 "aliases": [ 00:30:44.414 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:30:44.414 ], 00:30:44.414 "product_name": "GPT Disk", 00:30:44.414 "block_size": 4096, 00:30:44.414 "num_blocks": 655103, 00:30:44.414 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:30:44.414 "assigned_rate_limits": { 00:30:44.414 "rw_ios_per_sec": 0, 00:30:44.414 
"rw_mbytes_per_sec": 0, 00:30:44.414 "r_mbytes_per_sec": 0, 00:30:44.414 "w_mbytes_per_sec": 0 00:30:44.414 }, 00:30:44.414 "claimed": false, 00:30:44.414 "zoned": false, 00:30:44.414 "supported_io_types": { 00:30:44.414 "read": true, 00:30:44.414 "write": true, 00:30:44.414 "unmap": true, 00:30:44.414 "write_zeroes": true, 00:30:44.414 "flush": true, 00:30:44.414 "reset": true, 00:30:44.414 "compare": true, 00:30:44.414 "compare_and_write": false, 00:30:44.414 "abort": true, 00:30:44.414 "nvme_admin": false, 00:30:44.414 "nvme_io": false 00:30:44.414 }, 00:30:44.414 "driver_specific": { 00:30:44.414 "gpt": { 00:30:44.414 "base_bdev": "Nvme0n1", 00:30:44.414 "offset_blocks": 655360, 00:30:44.414 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:30:44.414 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:30:44.414 "partition_name": "SPDK_TEST_second" 00:30:44.414 } 00:30:44.414 } 00:30:44.414 } 00:30:44.414 ]' 00:30:44.414 10:44:38 -- bdev/blockdev.sh@625 -- # jq -r length 00:30:44.414 10:44:38 -- bdev/blockdev.sh@625 -- # [[ 1 == \1 ]] 00:30:44.414 10:44:38 -- bdev/blockdev.sh@626 -- # jq -r '.[0].aliases[0]' 00:30:44.673 10:44:38 -- bdev/blockdev.sh@626 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:30:44.673 10:44:38 -- bdev/blockdev.sh@627 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:30:44.673 10:44:38 -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:30:44.673 10:44:38 -- bdev/blockdev.sh@629 -- # killprocess 142975 00:30:44.673 10:44:38 -- common/autotest_common.sh@926 -- # '[' -z 142975 ']' 00:30:44.673 10:44:38 -- common/autotest_common.sh@930 -- # kill -0 142975 00:30:44.673 10:44:38 -- common/autotest_common.sh@931 -- # uname 00:30:44.673 10:44:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:44.673 10:44:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 142975 00:30:44.673 10:44:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:44.673 10:44:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:44.673 10:44:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 142975' 00:30:44.673 killing process with pid 142975 00:30:44.673 10:44:38 -- common/autotest_common.sh@945 -- # kill 142975 00:30:44.673 10:44:38 -- common/autotest_common.sh@950 -- # wait 142975 00:30:46.578 ************************************ 00:30:46.578 END TEST bdev_gpt_uuid 00:30:46.578 ************************************ 00:30:46.578 00:30:46.578 real 0m4.134s 00:30:46.578 user 0m4.575s 00:30:46.578 sys 0m0.494s 00:30:46.578 10:44:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:46.578 10:44:40 -- common/autotest_common.sh@10 -- # set +x 00:30:46.578 10:44:40 -- bdev/blockdev.sh@796 -- # [[ gpt == crypto_sw ]] 00:30:46.578 10:44:40 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:30:46.578 10:44:40 -- bdev/blockdev.sh@809 -- # cleanup 00:30:46.578 10:44:40 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:30:46.578 10:44:40 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:30:46.578 10:44:40 -- bdev/blockdev.sh@24 -- # [[ gpt == rbd ]] 00:30:46.578 10:44:40 -- bdev/blockdev.sh@28 -- # [[ gpt == daos ]] 00:30:46.578 10:44:40 -- bdev/blockdev.sh@32 -- # [[ gpt = \g\p\t ]] 00:30:46.578 10:44:40 -- 
bdev/blockdev.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:46.836 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:30:46.836 Waiting for block devices as requested 00:30:47.095 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:30:47.095 10:44:40 -- bdev/blockdev.sh@34 -- # [[ -b /dev/nvme0n1 ]] 00:30:47.095 10:44:40 -- bdev/blockdev.sh@35 -- # wipefs --all /dev/nvme0n1 00:30:47.095 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:30:47.095 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:30:47.095 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:30:47.095 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:30:47.095 10:44:40 -- bdev/blockdev.sh@38 -- # [[ gpt == xnvme ]] 00:30:47.095 ************************************ 00:30:47.095 END TEST blockdev_nvme_gpt 00:30:47.095 ************************************ 00:30:47.095 00:30:47.095 real 0m43.473s 00:30:47.095 user 1m0.737s 00:30:47.095 sys 0m5.984s 00:30:47.095 10:44:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:47.095 10:44:40 -- common/autotest_common.sh@10 -- # set +x 00:30:47.095 10:44:40 -- spdk/autotest.sh@222 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:30:47.095 10:44:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:47.095 10:44:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:47.095 10:44:40 -- common/autotest_common.sh@10 -- # set +x 00:30:47.095 ************************************ 00:30:47.095 START TEST nvme 00:30:47.095 ************************************ 00:30:47.095 10:44:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:30:47.095 * Looking for test storage... 00:30:47.095 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:30:47.095 10:44:40 -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:30:47.669 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:30:47.669 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:30:48.602 10:44:42 -- nvme/nvme.sh@79 -- # uname 00:30:48.602 10:44:42 -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:30:48.602 10:44:42 -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:30:48.602 10:44:42 -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:30:48.602 10:44:42 -- common/autotest_common.sh@1058 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:30:48.602 Waiting for stub to ready for secondary processes... 00:30:48.602 10:44:42 -- common/autotest_common.sh@1044 -- # _randomize_va_space=2 00:30:48.602 10:44:42 -- common/autotest_common.sh@1045 -- # echo 0 00:30:48.602 10:44:42 -- common/autotest_common.sh@1047 -- # stubpid=143429 00:30:48.602 10:44:42 -- common/autotest_common.sh@1048 -- # echo Waiting for stub to ready for secondary processes... 00:30:48.602 10:44:42 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:30:48.602 10:44:42 -- common/autotest_common.sh@1046 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:30:48.602 10:44:42 -- common/autotest_common.sh@1051 -- # [[ -e /proc/143429 ]] 00:30:48.602 10:44:42 -- common/autotest_common.sh@1052 -- # sleep 1s 00:30:48.861 [2024-07-12 10:44:42.521340] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:30:48.861 [2024-07-12 10:44:42.521724] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:49.798 10:44:43 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:30:49.798 10:44:43 -- common/autotest_common.sh@1051 -- # [[ -e /proc/143429 ]] 00:30:49.798 10:44:43 -- common/autotest_common.sh@1052 -- # sleep 1s 00:30:50.056 [2024-07-12 10:44:43.804411] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:50.312 [2024-07-12 10:44:44.005292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:50.312 [2024-07-12 10:44:44.005409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:50.312 [2024-07-12 10:44:44.005406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:50.312 [2024-07-12 10:44:44.019642] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:30:50.312 [2024-07-12 10:44:44.028132] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:30:50.312 [2024-07-12 10:44:44.028641] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:30:50.568 done. 00:30:50.568 10:44:44 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:30:50.568 10:44:44 -- common/autotest_common.sh@1054 -- # echo done. 00:30:50.568 10:44:44 -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:30:50.568 10:44:44 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:30:50.568 10:44:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:50.568 10:44:44 -- common/autotest_common.sh@10 -- # set +x 00:30:50.825 ************************************ 00:30:50.825 START TEST nvme_reset 00:30:50.825 ************************************ 00:30:50.825 10:44:44 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:30:51.083 Initializing NVMe Controllers 00:30:51.083 Skipping QEMU NVMe SSD at 0000:00:06.0 00:30:51.083 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:30:51.083 ************************************ 00:30:51.083 END TEST nvme_reset 00:30:51.083 ************************************ 00:30:51.083 00:30:51.083 real 0m0.289s 00:30:51.083 user 0m0.105s 00:30:51.083 sys 0m0.109s 00:30:51.083 10:44:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:51.083 10:44:44 -- common/autotest_common.sh@10 -- # set +x 00:30:51.083 10:44:44 -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:30:51.083 10:44:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:51.083 10:44:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:51.083 10:44:44 -- common/autotest_common.sh@10 -- # set +x 00:30:51.083 ************************************ 00:30:51.083 START TEST nvme_identify 00:30:51.083 ************************************ 00:30:51.084 10:44:44 -- common/autotest_common.sh@1104 -- # nvme_identify 00:30:51.084 10:44:44 -- nvme/nvme.sh@12 -- # bdfs=() 00:30:51.084 10:44:44 -- nvme/nvme.sh@12 -- # local bdfs bdf 00:30:51.084 10:44:44 -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:30:51.084 10:44:44 -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:30:51.084 10:44:44 -- common/autotest_common.sh@1498 -- # bdfs=() 
00:30:51.084 10:44:44 -- common/autotest_common.sh@1498 -- # local bdfs 00:30:51.084 10:44:44 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:51.084 10:44:44 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:30:51.084 10:44:44 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:30:51.084 10:44:44 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:30:51.084 10:44:44 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:30:51.084 10:44:44 -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:30:51.342 [2024-07-12 10:44:45.155943] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:06.0] process 143472 terminated unexpected 00:30:51.342 ===================================================== 00:30:51.342 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:30:51.342 ===================================================== 00:30:51.342 Controller Capabilities/Features 00:30:51.342 ================================ 00:30:51.342 Vendor ID: 1b36 00:30:51.342 Subsystem Vendor ID: 1af4 00:30:51.342 Serial Number: 12340 00:30:51.342 Model Number: QEMU NVMe Ctrl 00:30:51.342 Firmware Version: 8.0.0 00:30:51.342 Recommended Arb Burst: 6 00:30:51.342 IEEE OUI Identifier: 00 54 52 00:30:51.342 Multi-path I/O 00:30:51.342 May have multiple subsystem ports: No 00:30:51.342 May have multiple controllers: No 00:30:51.342 Associated with SR-IOV VF: No 00:30:51.342 Max Data Transfer Size: 524288 00:30:51.342 Max Number of Namespaces: 256 00:30:51.342 Max Number of I/O Queues: 64 00:30:51.342 NVMe Specification Version (VS): 1.4 00:30:51.342 NVMe Specification Version (Identify): 1.4 00:30:51.342 Maximum Queue Entries: 2048 00:30:51.342 Contiguous Queues Required: Yes 00:30:51.342 Arbitration Mechanisms Supported 00:30:51.342 Weighted Round Robin: Not Supported 00:30:51.342 Vendor Specific: Not Supported 00:30:51.342 Reset Timeout: 7500 ms 00:30:51.342 Doorbell Stride: 4 bytes 00:30:51.342 NVM Subsystem Reset: Not Supported 00:30:51.342 Command Sets Supported 00:30:51.342 NVM Command Set: Supported 00:30:51.342 Boot Partition: Not Supported 00:30:51.342 Memory Page Size Minimum: 4096 bytes 00:30:51.342 Memory Page Size Maximum: 65536 bytes 00:30:51.342 Persistent Memory Region: Not Supported 00:30:51.342 Optional Asynchronous Events Supported 00:30:51.342 Namespace Attribute Notices: Supported 00:30:51.342 Firmware Activation Notices: Not Supported 00:30:51.342 ANA Change Notices: Not Supported 00:30:51.342 PLE Aggregate Log Change Notices: Not Supported 00:30:51.342 LBA Status Info Alert Notices: Not Supported 00:30:51.342 EGE Aggregate Log Change Notices: Not Supported 00:30:51.342 Normal NVM Subsystem Shutdown event: Not Supported 00:30:51.342 Zone Descriptor Change Notices: Not Supported 00:30:51.342 Discovery Log Change Notices: Not Supported 00:30:51.342 Controller Attributes 00:30:51.343 128-bit Host Identifier: Not Supported 00:30:51.343 Non-Operational Permissive Mode: Not Supported 00:30:51.343 NVM Sets: Not Supported 00:30:51.343 Read Recovery Levels: Not Supported 00:30:51.343 Endurance Groups: Not Supported 00:30:51.343 Predictable Latency Mode: Not Supported 00:30:51.343 Traffic Based Keep ALive: Not Supported 00:30:51.343 Namespace Granularity: Not Supported 00:30:51.343 SQ Associations: Not Supported 00:30:51.343 UUID List: Not Supported 00:30:51.343 Multi-Domain Subsystem: Not Supported 00:30:51.343 
Fixed Capacity Management: Not Supported 00:30:51.343 Variable Capacity Management: Not Supported 00:30:51.343 Delete Endurance Group: Not Supported 00:30:51.343 Delete NVM Set: Not Supported 00:30:51.343 Extended LBA Formats Supported: Supported 00:30:51.343 Flexible Data Placement Supported: Not Supported 00:30:51.343 00:30:51.343 Controller Memory Buffer Support 00:30:51.343 ================================ 00:30:51.343 Supported: No 00:30:51.343 00:30:51.343 Persistent Memory Region Support 00:30:51.343 ================================ 00:30:51.343 Supported: No 00:30:51.343 00:30:51.343 Admin Command Set Attributes 00:30:51.343 ============================ 00:30:51.343 Security Send/Receive: Not Supported 00:30:51.343 Format NVM: Supported 00:30:51.343 Firmware Activate/Download: Not Supported 00:30:51.343 Namespace Management: Supported 00:30:51.343 Device Self-Test: Not Supported 00:30:51.343 Directives: Supported 00:30:51.343 NVMe-MI: Not Supported 00:30:51.343 Virtualization Management: Not Supported 00:30:51.343 Doorbell Buffer Config: Supported 00:30:51.343 Get LBA Status Capability: Not Supported 00:30:51.343 Command & Feature Lockdown Capability: Not Supported 00:30:51.343 Abort Command Limit: 4 00:30:51.343 Async Event Request Limit: 4 00:30:51.343 Number of Firmware Slots: N/A 00:30:51.343 Firmware Slot 1 Read-Only: N/A 00:30:51.343 Firmware Activation Without Reset: N/A 00:30:51.343 Multiple Update Detection Support: N/A 00:30:51.343 Firmware Update Granularity: No Information Provided 00:30:51.343 Per-Namespace SMART Log: Yes 00:30:51.343 Asymmetric Namespace Access Log Page: Not Supported 00:30:51.343 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:30:51.343 Command Effects Log Page: Supported 00:30:51.343 Get Log Page Extended Data: Supported 00:30:51.343 Telemetry Log Pages: Not Supported 00:30:51.343 Persistent Event Log Pages: Not Supported 00:30:51.343 Supported Log Pages Log Page: May Support 00:30:51.343 Commands Supported & Effects Log Page: Not Supported 00:30:51.343 Feature Identifiers & Effects Log Page:May Support 00:30:51.343 NVMe-MI Commands & Effects Log Page: May Support 00:30:51.343 Data Area 4 for Telemetry Log: Not Supported 00:30:51.343 Error Log Page Entries Supported: 1 00:30:51.343 Keep Alive: Not Supported 00:30:51.343 00:30:51.343 NVM Command Set Attributes 00:30:51.343 ========================== 00:30:51.343 Submission Queue Entry Size 00:30:51.343 Max: 64 00:30:51.343 Min: 64 00:30:51.343 Completion Queue Entry Size 00:30:51.343 Max: 16 00:30:51.343 Min: 16 00:30:51.343 Number of Namespaces: 256 00:30:51.343 Compare Command: Supported 00:30:51.343 Write Uncorrectable Command: Not Supported 00:30:51.343 Dataset Management Command: Supported 00:30:51.343 Write Zeroes Command: Supported 00:30:51.343 Set Features Save Field: Supported 00:30:51.343 Reservations: Not Supported 00:30:51.343 Timestamp: Supported 00:30:51.343 Copy: Supported 00:30:51.343 Volatile Write Cache: Present 00:30:51.343 Atomic Write Unit (Normal): 1 00:30:51.343 Atomic Write Unit (PFail): 1 00:30:51.343 Atomic Compare & Write Unit: 1 00:30:51.343 Fused Compare & Write: Not Supported 00:30:51.343 Scatter-Gather List 00:30:51.343 SGL Command Set: Supported 00:30:51.343 SGL Keyed: Not Supported 00:30:51.343 SGL Bit Bucket Descriptor: Not Supported 00:30:51.343 SGL Metadata Pointer: Not Supported 00:30:51.343 Oversized SGL: Not Supported 00:30:51.343 SGL Metadata Address: Not Supported 00:30:51.343 SGL Offset: Not Supported 00:30:51.343 Transport SGL Data Block: Not Supported 
00:30:51.343 Replay Protected Memory Block: Not Supported 00:30:51.343 00:30:51.343 Firmware Slot Information 00:30:51.343 ========================= 00:30:51.343 Active slot: 1 00:30:51.343 Slot 1 Firmware Revision: 1.0 00:30:51.343 00:30:51.343 00:30:51.343 Commands Supported and Effects 00:30:51.343 ============================== 00:30:51.343 Admin Commands 00:30:51.343 -------------- 00:30:51.343 Delete I/O Submission Queue (00h): Supported 00:30:51.343 Create I/O Submission Queue (01h): Supported 00:30:51.343 Get Log Page (02h): Supported 00:30:51.343 Delete I/O Completion Queue (04h): Supported 00:30:51.343 Create I/O Completion Queue (05h): Supported 00:30:51.343 Identify (06h): Supported 00:30:51.343 Abort (08h): Supported 00:30:51.343 Set Features (09h): Supported 00:30:51.343 Get Features (0Ah): Supported 00:30:51.343 Asynchronous Event Request (0Ch): Supported 00:30:51.343 Namespace Attachment (15h): Supported NS-Inventory-Change 00:30:51.343 Directive Send (19h): Supported 00:30:51.343 Directive Receive (1Ah): Supported 00:30:51.343 Virtualization Management (1Ch): Supported 00:30:51.343 Doorbell Buffer Config (7Ch): Supported 00:30:51.343 Format NVM (80h): Supported LBA-Change 00:30:51.343 I/O Commands 00:30:51.343 ------------ 00:30:51.343 Flush (00h): Supported LBA-Change 00:30:51.343 Write (01h): Supported LBA-Change 00:30:51.343 Read (02h): Supported 00:30:51.343 Compare (05h): Supported 00:30:51.343 Write Zeroes (08h): Supported LBA-Change 00:30:51.343 Dataset Management (09h): Supported LBA-Change 00:30:51.343 Unknown (0Ch): Supported 00:30:51.343 Unknown (12h): Supported 00:30:51.343 Copy (19h): Supported LBA-Change 00:30:51.343 Unknown (1Dh): Supported LBA-Change 00:30:51.343 00:30:51.343 Error Log 00:30:51.343 ========= 00:30:51.343 00:30:51.343 Arbitration 00:30:51.343 =========== 00:30:51.343 Arbitration Burst: no limit 00:30:51.343 00:30:51.343 Power Management 00:30:51.343 ================ 00:30:51.343 Number of Power States: 1 00:30:51.343 Current Power State: Power State #0 00:30:51.343 Power State #0: 00:30:51.343 Max Power: 25.00 W 00:30:51.343 Non-Operational State: Operational 00:30:51.343 Entry Latency: 16 microseconds 00:30:51.343 Exit Latency: 4 microseconds 00:30:51.343 Relative Read Throughput: 0 00:30:51.343 Relative Read Latency: 0 00:30:51.343 Relative Write Throughput: 0 00:30:51.343 Relative Write Latency: 0 00:30:51.343 Idle Power: Not Reported 00:30:51.343 Active Power: Not Reported 00:30:51.343 Non-Operational Permissive Mode: Not Supported 00:30:51.343 00:30:51.343 Health Information 00:30:51.343 ================== 00:30:51.343 Critical Warnings: 00:30:51.343 Available Spare Space: OK 00:30:51.343 Temperature: OK 00:30:51.343 Device Reliability: OK 00:30:51.343 Read Only: No 00:30:51.343 Volatile Memory Backup: OK 00:30:51.343 Current Temperature: 323 Kelvin (50 Celsius) 00:30:51.343 Temperature Threshold: 343 Kelvin (70 Celsius) 00:30:51.343 Available Spare: 0% 00:30:51.343 Available Spare Threshold: 0% 00:30:51.343 Life Percentage Used: 0% 00:30:51.343 Data Units Read: 8867 00:30:51.343 Data Units Written: 4328 00:30:51.343 Host Read Commands: 296473 00:30:51.343 Host Write Commands: 163320 00:30:51.343 Controller Busy Time: 0 minutes 00:30:51.343 Power Cycles: 0 00:30:51.343 Power On Hours: 0 hours 00:30:51.343 Unsafe Shutdowns: 0 00:30:51.343 Unrecoverable Media Errors: 0 00:30:51.343 Lifetime Error Log Entries: 0 00:30:51.343 Warning Temperature Time: 0 minutes 00:30:51.343 Critical Temperature Time: 0 minutes 00:30:51.343 00:30:51.343 
Number of Queues 00:30:51.343 ================ 00:30:51.343 Number of I/O Submission Queues: 64 00:30:51.343 Number of I/O Completion Queues: 64 00:30:51.343 00:30:51.343 ZNS Specific Controller Data 00:30:51.343 ============================ 00:30:51.343 Zone Append Size Limit: 0 00:30:51.343 00:30:51.343 00:30:51.343 Active Namespaces 00:30:51.343 ================= 00:30:51.343 Namespace ID:1 00:30:51.343 Error Recovery Timeout: Unlimited 00:30:51.343 Command Set Identifier: NVM (00h) 00:30:51.343 Deallocate: Supported 00:30:51.343 Deallocated/Unwritten Error: Supported 00:30:51.343 Deallocated Read Value: All 0x00 00:30:51.343 Deallocate in Write Zeroes: Not Supported 00:30:51.343 Deallocated Guard Field: 0xFFFF 00:30:51.343 Flush: Supported 00:30:51.343 Reservation: Not Supported 00:30:51.343 Namespace Sharing Capabilities: Private 00:30:51.343 Size (in LBAs): 1310720 (5GiB) 00:30:51.343 Capacity (in LBAs): 1310720 (5GiB) 00:30:51.343 Utilization (in LBAs): 1310720 (5GiB) 00:30:51.343 Thin Provisioning: Not Supported 00:30:51.343 Per-NS Atomic Units: No 00:30:51.343 Maximum Single Source Range Length: 128 00:30:51.343 Maximum Copy Length: 128 00:30:51.343 Maximum Source Range Count: 128 00:30:51.343 NGUID/EUI64 Never Reused: No 00:30:51.343 Namespace Write Protected: No 00:30:51.344 Number of LBA Formats: 8 00:30:51.344 Current LBA Format: LBA Format #04 00:30:51.344 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:51.344 LBA Format #01: Data Size: 512 Metadata Size: 8 00:30:51.344 LBA Format #02: Data Size: 512 Metadata Size: 16 00:30:51.344 LBA Format #03: Data Size: 512 Metadata Size: 64 00:30:51.344 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:30:51.344 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:30:51.344 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:30:51.344 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:30:51.344 00:30:51.344 10:44:45 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:30:51.344 10:44:45 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:30:51.602 ===================================================== 00:30:51.602 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:30:51.602 ===================================================== 00:30:51.602 Controller Capabilities/Features 00:30:51.602 ================================ 00:30:51.602 Vendor ID: 1b36 00:30:51.602 Subsystem Vendor ID: 1af4 00:30:51.602 Serial Number: 12340 00:30:51.602 Model Number: QEMU NVMe Ctrl 00:30:51.602 Firmware Version: 8.0.0 00:30:51.602 Recommended Arb Burst: 6 00:30:51.602 IEEE OUI Identifier: 00 54 52 00:30:51.602 Multi-path I/O 00:30:51.602 May have multiple subsystem ports: No 00:30:51.602 May have multiple controllers: No 00:30:51.602 Associated with SR-IOV VF: No 00:30:51.602 Max Data Transfer Size: 524288 00:30:51.602 Max Number of Namespaces: 256 00:30:51.602 Max Number of I/O Queues: 64 00:30:51.602 NVMe Specification Version (VS): 1.4 00:30:51.602 NVMe Specification Version (Identify): 1.4 00:30:51.602 Maximum Queue Entries: 2048 00:30:51.602 Contiguous Queues Required: Yes 00:30:51.602 Arbitration Mechanisms Supported 00:30:51.602 Weighted Round Robin: Not Supported 00:30:51.602 Vendor Specific: Not Supported 00:30:51.602 Reset Timeout: 7500 ms 00:30:51.602 Doorbell Stride: 4 bytes 00:30:51.602 NVM Subsystem Reset: Not Supported 00:30:51.602 Command Sets Supported 00:30:51.602 NVM Command Set: Supported 00:30:51.602 Boot Partition: Not Supported 00:30:51.602 Memory Page Size 
Minimum: 4096 bytes 00:30:51.602 Memory Page Size Maximum: 65536 bytes 00:30:51.602 Persistent Memory Region: Not Supported 00:30:51.602 Optional Asynchronous Events Supported 00:30:51.602 Namespace Attribute Notices: Supported 00:30:51.602 Firmware Activation Notices: Not Supported 00:30:51.602 ANA Change Notices: Not Supported 00:30:51.602 PLE Aggregate Log Change Notices: Not Supported 00:30:51.602 LBA Status Info Alert Notices: Not Supported 00:30:51.602 EGE Aggregate Log Change Notices: Not Supported 00:30:51.602 Normal NVM Subsystem Shutdown event: Not Supported 00:30:51.602 Zone Descriptor Change Notices: Not Supported 00:30:51.602 Discovery Log Change Notices: Not Supported 00:30:51.602 Controller Attributes 00:30:51.602 128-bit Host Identifier: Not Supported 00:30:51.602 Non-Operational Permissive Mode: Not Supported 00:30:51.602 NVM Sets: Not Supported 00:30:51.602 Read Recovery Levels: Not Supported 00:30:51.602 Endurance Groups: Not Supported 00:30:51.602 Predictable Latency Mode: Not Supported 00:30:51.602 Traffic Based Keep ALive: Not Supported 00:30:51.602 Namespace Granularity: Not Supported 00:30:51.602 SQ Associations: Not Supported 00:30:51.602 UUID List: Not Supported 00:30:51.602 Multi-Domain Subsystem: Not Supported 00:30:51.602 Fixed Capacity Management: Not Supported 00:30:51.602 Variable Capacity Management: Not Supported 00:30:51.602 Delete Endurance Group: Not Supported 00:30:51.602 Delete NVM Set: Not Supported 00:30:51.602 Extended LBA Formats Supported: Supported 00:30:51.602 Flexible Data Placement Supported: Not Supported 00:30:51.602 00:30:51.602 Controller Memory Buffer Support 00:30:51.602 ================================ 00:30:51.602 Supported: No 00:30:51.602 00:30:51.602 Persistent Memory Region Support 00:30:51.602 ================================ 00:30:51.602 Supported: No 00:30:51.602 00:30:51.602 Admin Command Set Attributes 00:30:51.602 ============================ 00:30:51.602 Security Send/Receive: Not Supported 00:30:51.602 Format NVM: Supported 00:30:51.602 Firmware Activate/Download: Not Supported 00:30:51.602 Namespace Management: Supported 00:30:51.602 Device Self-Test: Not Supported 00:30:51.602 Directives: Supported 00:30:51.602 NVMe-MI: Not Supported 00:30:51.602 Virtualization Management: Not Supported 00:30:51.602 Doorbell Buffer Config: Supported 00:30:51.602 Get LBA Status Capability: Not Supported 00:30:51.602 Command & Feature Lockdown Capability: Not Supported 00:30:51.602 Abort Command Limit: 4 00:30:51.602 Async Event Request Limit: 4 00:30:51.602 Number of Firmware Slots: N/A 00:30:51.602 Firmware Slot 1 Read-Only: N/A 00:30:51.602 Firmware Activation Without Reset: N/A 00:30:51.602 Multiple Update Detection Support: N/A 00:30:51.602 Firmware Update Granularity: No Information Provided 00:30:51.603 Per-Namespace SMART Log: Yes 00:30:51.603 Asymmetric Namespace Access Log Page: Not Supported 00:30:51.603 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:30:51.603 Command Effects Log Page: Supported 00:30:51.603 Get Log Page Extended Data: Supported 00:30:51.603 Telemetry Log Pages: Not Supported 00:30:51.603 Persistent Event Log Pages: Not Supported 00:30:51.603 Supported Log Pages Log Page: May Support 00:30:51.603 Commands Supported & Effects Log Page: Not Supported 00:30:51.603 Feature Identifiers & Effects Log Page:May Support 00:30:51.603 NVMe-MI Commands & Effects Log Page: May Support 00:30:51.603 Data Area 4 for Telemetry Log: Not Supported 00:30:51.603 Error Log Page Entries Supported: 1 00:30:51.603 Keep Alive: Not 
Supported 00:30:51.603 00:30:51.603 NVM Command Set Attributes 00:30:51.603 ========================== 00:30:51.603 Submission Queue Entry Size 00:30:51.603 Max: 64 00:30:51.603 Min: 64 00:30:51.603 Completion Queue Entry Size 00:30:51.603 Max: 16 00:30:51.603 Min: 16 00:30:51.603 Number of Namespaces: 256 00:30:51.603 Compare Command: Supported 00:30:51.603 Write Uncorrectable Command: Not Supported 00:30:51.603 Dataset Management Command: Supported 00:30:51.603 Write Zeroes Command: Supported 00:30:51.603 Set Features Save Field: Supported 00:30:51.603 Reservations: Not Supported 00:30:51.603 Timestamp: Supported 00:30:51.603 Copy: Supported 00:30:51.603 Volatile Write Cache: Present 00:30:51.603 Atomic Write Unit (Normal): 1 00:30:51.603 Atomic Write Unit (PFail): 1 00:30:51.603 Atomic Compare & Write Unit: 1 00:30:51.603 Fused Compare & Write: Not Supported 00:30:51.603 Scatter-Gather List 00:30:51.603 SGL Command Set: Supported 00:30:51.603 SGL Keyed: Not Supported 00:30:51.603 SGL Bit Bucket Descriptor: Not Supported 00:30:51.603 SGL Metadata Pointer: Not Supported 00:30:51.603 Oversized SGL: Not Supported 00:30:51.603 SGL Metadata Address: Not Supported 00:30:51.603 SGL Offset: Not Supported 00:30:51.603 Transport SGL Data Block: Not Supported 00:30:51.603 Replay Protected Memory Block: Not Supported 00:30:51.603 00:30:51.603 Firmware Slot Information 00:30:51.603 ========================= 00:30:51.603 Active slot: 1 00:30:51.603 Slot 1 Firmware Revision: 1.0 00:30:51.603 00:30:51.603 00:30:51.603 Commands Supported and Effects 00:30:51.603 ============================== 00:30:51.603 Admin Commands 00:30:51.603 -------------- 00:30:51.603 Delete I/O Submission Queue (00h): Supported 00:30:51.603 Create I/O Submission Queue (01h): Supported 00:30:51.603 Get Log Page (02h): Supported 00:30:51.603 Delete I/O Completion Queue (04h): Supported 00:30:51.603 Create I/O Completion Queue (05h): Supported 00:30:51.603 Identify (06h): Supported 00:30:51.603 Abort (08h): Supported 00:30:51.603 Set Features (09h): Supported 00:30:51.603 Get Features (0Ah): Supported 00:30:51.603 Asynchronous Event Request (0Ch): Supported 00:30:51.603 Namespace Attachment (15h): Supported NS-Inventory-Change 00:30:51.603 Directive Send (19h): Supported 00:30:51.603 Directive Receive (1Ah): Supported 00:30:51.603 Virtualization Management (1Ch): Supported 00:30:51.603 Doorbell Buffer Config (7Ch): Supported 00:30:51.603 Format NVM (80h): Supported LBA-Change 00:30:51.603 I/O Commands 00:30:51.603 ------------ 00:30:51.603 Flush (00h): Supported LBA-Change 00:30:51.603 Write (01h): Supported LBA-Change 00:30:51.603 Read (02h): Supported 00:30:51.603 Compare (05h): Supported 00:30:51.603 Write Zeroes (08h): Supported LBA-Change 00:30:51.603 Dataset Management (09h): Supported LBA-Change 00:30:51.603 Unknown (0Ch): Supported 00:30:51.603 Unknown (12h): Supported 00:30:51.603 Copy (19h): Supported LBA-Change 00:30:51.603 Unknown (1Dh): Supported LBA-Change 00:30:51.603 00:30:51.603 Error Log 00:30:51.603 ========= 00:30:51.603 00:30:51.603 Arbitration 00:30:51.603 =========== 00:30:51.603 Arbitration Burst: no limit 00:30:51.603 00:30:51.603 Power Management 00:30:51.603 ================ 00:30:51.603 Number of Power States: 1 00:30:51.603 Current Power State: Power State #0 00:30:51.603 Power State #0: 00:30:51.603 Max Power: 25.00 W 00:30:51.603 Non-Operational State: Operational 00:30:51.603 Entry Latency: 16 microseconds 00:30:51.603 Exit Latency: 4 microseconds 00:30:51.603 Relative Read Throughput: 0 
00:30:51.603 Relative Read Latency: 0 00:30:51.603 Relative Write Throughput: 0 00:30:51.603 Relative Write Latency: 0 00:30:51.861 Idle Power: Not Reported 00:30:51.861 Active Power: Not Reported 00:30:51.861 Non-Operational Permissive Mode: Not Supported 00:30:51.861 00:30:51.861 Health Information 00:30:51.861 ================== 00:30:51.861 Critical Warnings: 00:30:51.861 Available Spare Space: OK 00:30:51.861 Temperature: OK 00:30:51.861 Device Reliability: OK 00:30:51.861 Read Only: No 00:30:51.861 Volatile Memory Backup: OK 00:30:51.861 Current Temperature: 323 Kelvin (50 Celsius) 00:30:51.861 Temperature Threshold: 343 Kelvin (70 Celsius) 00:30:51.861 Available Spare: 0% 00:30:51.861 Available Spare Threshold: 0% 00:30:51.861 Life Percentage Used: 0% 00:30:51.861 Data Units Read: 8867 00:30:51.861 Data Units Written: 4328 00:30:51.861 Host Read Commands: 296473 00:30:51.861 Host Write Commands: 163320 00:30:51.861 Controller Busy Time: 0 minutes 00:30:51.861 Power Cycles: 0 00:30:51.861 Power On Hours: 0 hours 00:30:51.861 Unsafe Shutdowns: 0 00:30:51.861 Unrecoverable Media Errors: 0 00:30:51.861 Lifetime Error Log Entries: 0 00:30:51.861 Warning Temperature Time: 0 minutes 00:30:51.861 Critical Temperature Time: 0 minutes 00:30:51.861 00:30:51.861 Number of Queues 00:30:51.861 ================ 00:30:51.861 Number of I/O Submission Queues: 64 00:30:51.861 Number of I/O Completion Queues: 64 00:30:51.861 00:30:51.861 ZNS Specific Controller Data 00:30:51.861 ============================ 00:30:51.861 Zone Append Size Limit: 0 00:30:51.861 00:30:51.861 00:30:51.861 Active Namespaces 00:30:51.861 ================= 00:30:51.861 Namespace ID:1 00:30:51.861 Error Recovery Timeout: Unlimited 00:30:51.861 Command Set Identifier: NVM (00h) 00:30:51.861 Deallocate: Supported 00:30:51.861 Deallocated/Unwritten Error: Supported 00:30:51.861 Deallocated Read Value: All 0x00 00:30:51.861 Deallocate in Write Zeroes: Not Supported 00:30:51.861 Deallocated Guard Field: 0xFFFF 00:30:51.861 Flush: Supported 00:30:51.861 Reservation: Not Supported 00:30:51.861 Namespace Sharing Capabilities: Private 00:30:51.861 Size (in LBAs): 1310720 (5GiB) 00:30:51.861 Capacity (in LBAs): 1310720 (5GiB) 00:30:51.861 Utilization (in LBAs): 1310720 (5GiB) 00:30:51.861 Thin Provisioning: Not Supported 00:30:51.861 Per-NS Atomic Units: No 00:30:51.861 Maximum Single Source Range Length: 128 00:30:51.861 Maximum Copy Length: 128 00:30:51.861 Maximum Source Range Count: 128 00:30:51.861 NGUID/EUI64 Never Reused: No 00:30:51.861 Namespace Write Protected: No 00:30:51.861 Number of LBA Formats: 8 00:30:51.861 Current LBA Format: LBA Format #04 00:30:51.861 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:51.861 LBA Format #01: Data Size: 512 Metadata Size: 8 00:30:51.861 LBA Format #02: Data Size: 512 Metadata Size: 16 00:30:51.862 LBA Format #03: Data Size: 512 Metadata Size: 64 00:30:51.862 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:30:51.862 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:30:51.862 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:30:51.862 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:30:51.862 00:30:51.862 ************************************ 00:30:51.862 END TEST nvme_identify 00:30:51.862 ************************************ 00:30:51.862 00:30:51.862 real 0m0.700s 00:30:51.862 user 0m0.263s 00:30:51.862 sys 0m0.316s 00:30:51.862 10:44:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:51.862 10:44:45 -- common/autotest_common.sh@10 -- # set +x 00:30:51.862 
10:44:45 -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:30:51.862 10:44:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:51.862 10:44:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:51.862 10:44:45 -- common/autotest_common.sh@10 -- # set +x 00:30:51.862 ************************************ 00:30:51.862 START TEST nvme_perf 00:30:51.862 ************************************ 00:30:51.862 10:44:45 -- common/autotest_common.sh@1104 -- # nvme_perf 00:30:51.862 10:44:45 -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:30:53.238 Initializing NVMe Controllers 00:30:53.238 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:30:53.238 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:30:53.238 Initialization complete. Launching workers. 00:30:53.238 ======================================================== 00:30:53.238 Latency(us) 00:30:53.238 Device Information : IOPS MiB/s Average min max 00:30:53.238 PCIE (0000:00:06.0) NSID 1 from core 0: 54271.95 636.00 2358.32 1331.46 7027.51 00:30:53.238 ======================================================== 00:30:53.238 Total : 54271.95 636.00 2358.32 1331.46 7027.51 00:30:53.238 00:30:53.238 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:30:53.238 ================================================================================= 00:30:53.238 1.00000% : 1519.244us 00:30:53.238 10.00000% : 1705.425us 00:30:53.238 25.00000% : 1951.185us 00:30:53.238 50.00000% : 2338.444us 00:30:53.238 75.00000% : 2725.702us 00:30:53.238 90.00000% : 2993.804us 00:30:53.238 95.00000% : 3276.800us 00:30:53.238 98.00000% : 3604.480us 00:30:53.238 99.00000% : 3723.636us 00:30:53.238 99.50000% : 3842.793us 00:30:53.238 99.90000% : 5213.091us 00:30:53.238 99.99000% : 6821.702us 00:30:53.238 99.99900% : 7030.225us 00:30:53.239 99.99990% : 7030.225us 00:30:53.239 99.99999% : 7030.225us 00:30:53.239 00:30:53.239 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:30:53.239 ============================================================================== 00:30:53.239 Range in us Cumulative IO count 00:30:53.239 1325.615 - 1333.062: 0.0018% ( 1) 00:30:53.239 1340.509 - 1347.956: 0.0037% ( 1) 00:30:53.239 1347.956 - 1355.404: 0.0055% ( 1) 00:30:53.239 1362.851 - 1370.298: 0.0166% ( 6) 00:30:53.239 1370.298 - 1377.745: 0.0203% ( 2) 00:30:53.239 1377.745 - 1385.193: 0.0258% ( 3) 00:30:53.239 1385.193 - 1392.640: 0.0332% ( 4) 00:30:53.239 1392.640 - 1400.087: 0.0369% ( 2) 00:30:53.239 1400.087 - 1407.535: 0.0497% ( 7) 00:30:53.239 1407.535 - 1414.982: 0.0608% ( 6) 00:30:53.239 1414.982 - 1422.429: 0.0792% ( 10) 00:30:53.239 1422.429 - 1429.876: 0.0995% ( 11) 00:30:53.239 1429.876 - 1437.324: 0.1235% ( 13) 00:30:53.239 1437.324 - 1444.771: 0.1492% ( 14) 00:30:53.239 1444.771 - 1452.218: 0.2008% ( 28) 00:30:53.239 1452.218 - 1459.665: 0.2414% ( 22) 00:30:53.239 1459.665 - 1467.113: 0.3096% ( 37) 00:30:53.239 1467.113 - 1474.560: 0.3925% ( 45) 00:30:53.239 1474.560 - 1482.007: 0.4957% ( 56) 00:30:53.239 1482.007 - 1489.455: 0.5878% ( 50) 00:30:53.239 1489.455 - 1496.902: 0.7039% ( 63) 00:30:53.239 1496.902 - 1504.349: 0.8310% ( 69) 00:30:53.239 1504.349 - 1511.796: 0.9876% ( 85) 00:30:53.239 1511.796 - 1519.244: 1.1995% ( 115) 00:30:53.239 1519.244 - 1526.691: 1.3617% ( 88) 00:30:53.239 1526.691 - 1534.138: 1.6012% ( 130) 00:30:53.239 1534.138 - 1541.585: 1.8702% ( 146) 00:30:53.239 1541.585 - 1549.033: 2.1190% ( 135) 00:30:53.239 1549.033 - 
1556.480: 2.4340% ( 171) 00:30:53.239 1556.480 - 1563.927: 2.7362% ( 164) 00:30:53.239 1563.927 - 1571.375: 3.0550% ( 173) 00:30:53.239 1571.375 - 1578.822: 3.3995% ( 187) 00:30:53.239 1578.822 - 1586.269: 3.7423% ( 186) 00:30:53.239 1586.269 - 1593.716: 4.0960% ( 192) 00:30:53.239 1593.716 - 1601.164: 4.4535% ( 194) 00:30:53.239 1601.164 - 1608.611: 4.8533% ( 217) 00:30:53.239 1608.611 - 1616.058: 5.2384% ( 209) 00:30:53.239 1616.058 - 1623.505: 5.6420% ( 219) 00:30:53.239 1623.505 - 1630.953: 6.0786% ( 237) 00:30:53.239 1630.953 - 1638.400: 6.5116% ( 235) 00:30:53.239 1638.400 - 1645.847: 6.9391% ( 232) 00:30:53.239 1645.847 - 1653.295: 7.3813% ( 240) 00:30:53.239 1653.295 - 1660.742: 7.8199% ( 238) 00:30:53.239 1660.742 - 1668.189: 8.2271% ( 221) 00:30:53.239 1668.189 - 1675.636: 8.6767% ( 244) 00:30:53.239 1675.636 - 1683.084: 9.1263% ( 244) 00:30:53.239 1683.084 - 1690.531: 9.5298% ( 219) 00:30:53.239 1690.531 - 1697.978: 9.9978% ( 254) 00:30:53.239 1697.978 - 1705.425: 10.4787% ( 261) 00:30:53.239 1705.425 - 1712.873: 10.8951% ( 226) 00:30:53.239 1712.873 - 1720.320: 11.3594% ( 252) 00:30:53.239 1720.320 - 1727.767: 11.8238% ( 252) 00:30:53.239 1727.767 - 1735.215: 12.2715% ( 243) 00:30:53.239 1735.215 - 1742.662: 12.7211% ( 244) 00:30:53.239 1742.662 - 1750.109: 13.2112% ( 266) 00:30:53.239 1750.109 - 1757.556: 13.6479% ( 237) 00:30:53.239 1757.556 - 1765.004: 14.1325% ( 263) 00:30:53.239 1765.004 - 1772.451: 14.6245% ( 267) 00:30:53.239 1772.451 - 1779.898: 15.0409% ( 226) 00:30:53.239 1779.898 - 1787.345: 15.5255% ( 263) 00:30:53.239 1787.345 - 1794.793: 16.0175% ( 267) 00:30:53.239 1794.793 - 1802.240: 16.4707% ( 246) 00:30:53.239 1802.240 - 1809.687: 16.9222% ( 245) 00:30:53.239 1809.687 - 1817.135: 17.4141% ( 267) 00:30:53.239 1817.135 - 1824.582: 17.8895% ( 258) 00:30:53.239 1824.582 - 1832.029: 18.3225% ( 235) 00:30:53.239 1832.029 - 1839.476: 18.8329% ( 277) 00:30:53.239 1839.476 - 1846.924: 19.3083% ( 258) 00:30:53.239 1846.924 - 1854.371: 19.7468% ( 238) 00:30:53.239 1854.371 - 1861.818: 20.2462% ( 271) 00:30:53.239 1861.818 - 1869.265: 20.7584% ( 278) 00:30:53.239 1869.265 - 1876.713: 21.1748% ( 226) 00:30:53.239 1876.713 - 1884.160: 21.6668% ( 267) 00:30:53.239 1884.160 - 1891.607: 22.1606% ( 268) 00:30:53.239 1891.607 - 1899.055: 22.5862% ( 231) 00:30:53.239 1899.055 - 1906.502: 23.0506% ( 252) 00:30:53.239 1906.502 - 1921.396: 23.9884% ( 509) 00:30:53.239 1921.396 - 1936.291: 24.9908% ( 544) 00:30:53.239 1936.291 - 1951.185: 25.9194% ( 504) 00:30:53.239 1951.185 - 1966.080: 26.8997% ( 532) 00:30:53.239 1966.080 - 1980.975: 27.8431% ( 512) 00:30:53.239 1980.975 - 1995.869: 28.7957% ( 517) 00:30:53.239 1995.869 - 2010.764: 29.7796% ( 534) 00:30:53.239 2010.764 - 2025.658: 30.7193% ( 510) 00:30:53.239 2025.658 - 2040.553: 31.6627% ( 512) 00:30:53.239 2040.553 - 2055.447: 32.6504% ( 536) 00:30:53.239 2055.447 - 2070.342: 33.5753% ( 502) 00:30:53.239 2070.342 - 2085.236: 34.5353% ( 521) 00:30:53.239 2085.236 - 2100.131: 35.4971% ( 522) 00:30:53.239 2100.131 - 2115.025: 36.4368% ( 510) 00:30:53.239 2115.025 - 2129.920: 37.3913% ( 518) 00:30:53.239 2129.920 - 2144.815: 38.3384% ( 514) 00:30:53.239 2144.815 - 2159.709: 39.2762% ( 509) 00:30:53.239 2159.709 - 2174.604: 40.2436% ( 525) 00:30:53.239 2174.604 - 2189.498: 41.1833% ( 510) 00:30:53.239 2189.498 - 2204.393: 42.1212% ( 509) 00:30:53.239 2204.393 - 2219.287: 43.0996% ( 531) 00:30:53.239 2219.287 - 2234.182: 44.0080% ( 493) 00:30:53.239 2234.182 - 2249.076: 44.9606% ( 517) 00:30:53.239 2249.076 - 2263.971: 45.9113% ( 
516) 00:30:53.239 2263.971 - 2278.865: 46.8695% ( 520) 00:30:53.239 2278.865 - 2293.760: 47.8055% ( 508) 00:30:53.239 2293.760 - 2308.655: 48.7636% ( 520) 00:30:53.239 2308.655 - 2323.549: 49.7199% ( 519) 00:30:53.239 2323.549 - 2338.444: 50.6873% ( 525) 00:30:53.239 2338.444 - 2353.338: 51.6638% ( 530) 00:30:53.239 2353.338 - 2368.233: 52.5925% ( 504) 00:30:53.239 2368.233 - 2383.127: 53.5414% ( 515) 00:30:53.239 2383.127 - 2398.022: 54.5051% ( 523) 00:30:53.239 2398.022 - 2412.916: 55.4466% ( 511) 00:30:53.239 2412.916 - 2427.811: 56.3956% ( 515) 00:30:53.239 2427.811 - 2442.705: 57.3500% ( 518) 00:30:53.239 2442.705 - 2457.600: 58.2842% ( 507) 00:30:53.239 2457.600 - 2472.495: 59.2442% ( 521) 00:30:53.239 2472.495 - 2487.389: 60.2281% ( 534) 00:30:53.239 2487.389 - 2502.284: 61.1531% ( 502) 00:30:53.239 2502.284 - 2517.178: 62.1038% ( 516) 00:30:53.239 2517.178 - 2532.073: 63.0694% ( 524) 00:30:53.239 2532.073 - 2546.967: 64.0017% ( 506) 00:30:53.239 2546.967 - 2561.862: 64.9617% ( 521) 00:30:53.239 2561.862 - 2576.756: 65.9290% ( 525) 00:30:53.239 2576.756 - 2591.651: 66.8724% ( 512) 00:30:53.239 2591.651 - 2606.545: 67.8471% ( 529) 00:30:53.239 2606.545 - 2621.440: 68.7905% ( 512) 00:30:53.239 2621.440 - 2636.335: 69.7395% ( 515) 00:30:53.239 2636.335 - 2651.229: 70.7252% ( 535) 00:30:53.239 2651.229 - 2666.124: 71.6465% ( 500) 00:30:53.239 2666.124 - 2681.018: 72.5733% ( 503) 00:30:53.239 2681.018 - 2695.913: 73.5388% ( 524) 00:30:53.239 2695.913 - 2710.807: 74.4675% ( 504) 00:30:53.239 2710.807 - 2725.702: 75.4238% ( 519) 00:30:53.239 2725.702 - 2740.596: 76.3746% ( 516) 00:30:53.239 2740.596 - 2755.491: 77.3143% ( 510) 00:30:53.239 2755.491 - 2770.385: 78.2687% ( 518) 00:30:53.239 2770.385 - 2785.280: 79.2140% ( 513) 00:30:53.239 2785.280 - 2800.175: 80.1574% ( 512) 00:30:53.239 2800.175 - 2815.069: 81.1284% ( 527) 00:30:53.239 2815.069 - 2829.964: 82.0423% ( 496) 00:30:53.239 2829.964 - 2844.858: 82.9525% ( 494) 00:30:53.239 2844.858 - 2859.753: 83.8867% ( 507) 00:30:53.239 2859.753 - 2874.647: 84.7490% ( 468) 00:30:53.239 2874.647 - 2889.542: 85.6261% ( 476) 00:30:53.239 2889.542 - 2904.436: 86.4516% ( 448) 00:30:53.239 2904.436 - 2919.331: 87.2549% ( 436) 00:30:53.239 2919.331 - 2934.225: 88.0141% ( 412) 00:30:53.239 2934.225 - 2949.120: 88.6903% ( 367) 00:30:53.239 2949.120 - 2964.015: 89.3094% ( 336) 00:30:53.239 2964.015 - 2978.909: 89.8953% ( 318) 00:30:53.239 2978.909 - 2993.804: 90.4205% ( 285) 00:30:53.239 2993.804 - 3008.698: 90.8793% ( 249) 00:30:53.239 3008.698 - 3023.593: 91.3381% ( 249) 00:30:53.239 3023.593 - 3038.487: 91.7379% ( 217) 00:30:53.239 3038.487 - 3053.382: 92.1083% ( 201) 00:30:53.239 3053.382 - 3068.276: 92.4344% ( 177) 00:30:53.239 3068.276 - 3083.171: 92.7034% ( 146) 00:30:53.239 3083.171 - 3098.065: 92.9853% ( 153) 00:30:53.239 3098.065 - 3112.960: 93.2212% ( 128) 00:30:53.239 3112.960 - 3127.855: 93.4220% ( 109) 00:30:53.239 3127.855 - 3142.749: 93.6192% ( 107) 00:30:53.239 3142.749 - 3157.644: 93.7887% ( 92) 00:30:53.239 3157.644 - 3172.538: 93.9453% ( 85) 00:30:53.239 3172.538 - 3187.433: 94.1075% ( 88) 00:30:53.239 3187.433 - 3202.327: 94.2567% ( 81) 00:30:53.239 3202.327 - 3217.222: 94.4152% ( 86) 00:30:53.239 3217.222 - 3232.116: 94.5681% ( 83) 00:30:53.239 3232.116 - 3247.011: 94.7173% ( 81) 00:30:53.239 3247.011 - 3261.905: 94.8740% ( 85) 00:30:53.239 3261.905 - 3276.800: 95.0122% ( 75) 00:30:53.239 3276.800 - 3291.695: 95.1651% ( 83) 00:30:53.239 3291.695 - 3306.589: 95.3070% ( 77) 00:30:53.239 3306.589 - 3321.484: 95.4415% ( 73) 
00:30:53.239 3321.484 - 3336.378: 95.5797% ( 75) 00:30:53.239 3336.378 - 3351.273: 95.7216% ( 77) 00:30:53.239 3351.273 - 3366.167: 95.8634% ( 77) 00:30:53.239 3366.167 - 3381.062: 96.0035% ( 76) 00:30:53.239 3381.062 - 3395.956: 96.1380% ( 73) 00:30:53.239 3395.956 - 3410.851: 96.2835% ( 79) 00:30:53.239 3410.851 - 3425.745: 96.4291% ( 79) 00:30:53.239 3425.745 - 3440.640: 96.5710% ( 77) 00:30:53.239 3440.640 - 3455.535: 96.6944% ( 67) 00:30:53.239 3455.535 - 3470.429: 96.8381% ( 78) 00:30:53.239 3470.429 - 3485.324: 96.9653% ( 69) 00:30:53.239 3485.324 - 3500.218: 97.1090% ( 78) 00:30:53.239 3500.218 - 3515.113: 97.2454% ( 74) 00:30:53.239 3515.113 - 3530.007: 97.3835% ( 75) 00:30:53.239 3530.007 - 3544.902: 97.5144% ( 71) 00:30:53.239 3544.902 - 3559.796: 97.6562% ( 77) 00:30:53.239 3559.796 - 3574.691: 97.7963% ( 76) 00:30:53.239 3574.691 - 3589.585: 97.9308% ( 73) 00:30:53.240 3589.585 - 3604.480: 98.0671% ( 74) 00:30:53.240 3604.480 - 3619.375: 98.1980% ( 71) 00:30:53.240 3619.375 - 3634.269: 98.3269% ( 70) 00:30:53.240 3634.269 - 3649.164: 98.4596% ( 72) 00:30:53.240 3649.164 - 3664.058: 98.5849% ( 68) 00:30:53.240 3664.058 - 3678.953: 98.7084% ( 67) 00:30:53.240 3678.953 - 3693.847: 98.8171% ( 59) 00:30:53.240 3693.847 - 3708.742: 98.9239% ( 58) 00:30:53.240 3708.742 - 3723.636: 99.0216% ( 53) 00:30:53.240 3723.636 - 3738.531: 99.1156% ( 51) 00:30:53.240 3738.531 - 3753.425: 99.2114% ( 52) 00:30:53.240 3753.425 - 3768.320: 99.2888% ( 42) 00:30:53.240 3768.320 - 3783.215: 99.3514% ( 34) 00:30:53.240 3783.215 - 3798.109: 99.4122% ( 33) 00:30:53.240 3798.109 - 3813.004: 99.4638% ( 28) 00:30:53.240 3813.004 - 3842.793: 99.5449% ( 44) 00:30:53.240 3842.793 - 3872.582: 99.6112% ( 36) 00:30:53.240 3872.582 - 3902.371: 99.6481% ( 20) 00:30:53.240 3902.371 - 3932.160: 99.6739% ( 14) 00:30:53.240 3932.160 - 3961.949: 99.6978% ( 13) 00:30:53.240 3961.949 - 3991.738: 99.7126% ( 8) 00:30:53.240 3991.738 - 4021.527: 99.7255% ( 7) 00:30:53.240 4021.527 - 4051.316: 99.7347% ( 5) 00:30:53.240 4051.316 - 4081.105: 99.7457% ( 6) 00:30:53.240 4081.105 - 4110.895: 99.7586% ( 7) 00:30:53.240 4110.895 - 4140.684: 99.7715% ( 7) 00:30:53.240 4140.684 - 4170.473: 99.7844% ( 7) 00:30:53.240 4170.473 - 4200.262: 99.7936% ( 5) 00:30:53.240 4200.262 - 4230.051: 99.8047% ( 6) 00:30:53.240 4230.051 - 4259.840: 99.8176% ( 7) 00:30:53.240 4259.840 - 4289.629: 99.8286% ( 6) 00:30:53.240 4289.629 - 4319.418: 99.8360% ( 4) 00:30:53.240 4319.418 - 4349.207: 99.8434% ( 4) 00:30:53.240 4349.207 - 4378.996: 99.8508% ( 4) 00:30:53.240 4378.996 - 4408.785: 99.8544% ( 2) 00:30:53.240 4408.785 - 4438.575: 99.8563% ( 1) 00:30:53.240 4438.575 - 4468.364: 99.8581% ( 1) 00:30:53.240 4468.364 - 4498.153: 99.8600% ( 1) 00:30:53.240 4498.153 - 4527.942: 99.8618% ( 1) 00:30:53.240 4527.942 - 4557.731: 99.8636% ( 1) 00:30:53.240 4557.731 - 4587.520: 99.8655% ( 1) 00:30:53.240 4587.520 - 4617.309: 99.8673% ( 1) 00:30:53.240 4617.309 - 4647.098: 99.8692% ( 1) 00:30:53.240 4647.098 - 4676.887: 99.8710% ( 1) 00:30:53.240 4706.676 - 4736.465: 99.8729% ( 1) 00:30:53.240 4736.465 - 4766.255: 99.8747% ( 1) 00:30:53.240 4766.255 - 4796.044: 99.8765% ( 1) 00:30:53.240 4796.044 - 4825.833: 99.8784% ( 1) 00:30:53.240 4825.833 - 4855.622: 99.8802% ( 1) 00:30:53.240 4855.622 - 4885.411: 99.8821% ( 1) 00:30:53.240 4885.411 - 4915.200: 99.8839% ( 1) 00:30:53.240 4915.200 - 4944.989: 99.8858% ( 1) 00:30:53.240 4944.989 - 4974.778: 99.8876% ( 1) 00:30:53.240 4974.778 - 5004.567: 99.8894% ( 1) 00:30:53.240 5004.567 - 5034.356: 99.8913% ( 1) 00:30:53.240 
5064.145 - 5093.935: 99.8931% ( 1) 00:30:53.240 5093.935 - 5123.724: 99.8950% ( 1) 00:30:53.240 5123.724 - 5153.513: 99.8968% ( 1) 00:30:53.240 5153.513 - 5183.302: 99.8987% ( 1) 00:30:53.240 5183.302 - 5213.091: 99.9005% ( 1) 00:30:53.240 5213.091 - 5242.880: 99.9023% ( 1) 00:30:53.240 5242.880 - 5272.669: 99.9042% ( 1) 00:30:53.240 5302.458 - 5332.247: 99.9060% ( 1) 00:30:53.240 5332.247 - 5362.036: 99.9079% ( 1) 00:30:53.240 5362.036 - 5391.825: 99.9097% ( 1) 00:30:53.240 5391.825 - 5421.615: 99.9116% ( 1) 00:30:53.240 5421.615 - 5451.404: 99.9134% ( 1) 00:30:53.240 5451.404 - 5481.193: 99.9152% ( 1) 00:30:53.240 5481.193 - 5510.982: 99.9171% ( 1) 00:30:53.240 5510.982 - 5540.771: 99.9189% ( 1) 00:30:53.240 5540.771 - 5570.560: 99.9208% ( 1) 00:30:53.240 5570.560 - 5600.349: 99.9226% ( 1) 00:30:53.240 5600.349 - 5630.138: 99.9245% ( 1) 00:30:53.240 5659.927 - 5689.716: 99.9263% ( 1) 00:30:53.240 5689.716 - 5719.505: 99.9281% ( 1) 00:30:53.240 5719.505 - 5749.295: 99.9300% ( 1) 00:30:53.240 5749.295 - 5779.084: 99.9318% ( 1) 00:30:53.240 5779.084 - 5808.873: 99.9337% ( 1) 00:30:53.240 5808.873 - 5838.662: 99.9355% ( 1) 00:30:53.240 5838.662 - 5868.451: 99.9374% ( 1) 00:30:53.240 5868.451 - 5898.240: 99.9392% ( 1) 00:30:53.240 5898.240 - 5928.029: 99.9410% ( 1) 00:30:53.240 5928.029 - 5957.818: 99.9429% ( 1) 00:30:53.240 5957.818 - 5987.607: 99.9447% ( 1) 00:30:53.240 5987.607 - 6017.396: 99.9466% ( 1) 00:30:53.240 6047.185 - 6076.975: 99.9484% ( 1) 00:30:53.240 6076.975 - 6106.764: 99.9503% ( 1) 00:30:53.240 6106.764 - 6136.553: 99.9521% ( 1) 00:30:53.240 6136.553 - 6166.342: 99.9539% ( 1) 00:30:53.240 6166.342 - 6196.131: 99.9558% ( 1) 00:30:53.240 6196.131 - 6225.920: 99.9576% ( 1) 00:30:53.240 6225.920 - 6255.709: 99.9595% ( 1) 00:30:53.240 6255.709 - 6285.498: 99.9613% ( 1) 00:30:53.240 6285.498 - 6315.287: 99.9631% ( 1) 00:30:53.240 6315.287 - 6345.076: 99.9650% ( 1) 00:30:53.240 6345.076 - 6374.865: 99.9668% ( 1) 00:30:53.240 6404.655 - 6434.444: 99.9687% ( 1) 00:30:53.240 6434.444 - 6464.233: 99.9705% ( 1) 00:30:53.240 6464.233 - 6494.022: 99.9724% ( 1) 00:30:53.240 6494.022 - 6523.811: 99.9742% ( 1) 00:30:53.240 6523.811 - 6553.600: 99.9760% ( 1) 00:30:53.240 6583.389 - 6613.178: 99.9779% ( 1) 00:30:53.240 6613.178 - 6642.967: 99.9797% ( 1) 00:30:53.240 6642.967 - 6672.756: 99.9816% ( 1) 00:30:53.240 6672.756 - 6702.545: 99.9834% ( 1) 00:30:53.240 6702.545 - 6732.335: 99.9853% ( 1) 00:30:53.240 6732.335 - 6762.124: 99.9871% ( 1) 00:30:53.240 6762.124 - 6791.913: 99.9889% ( 1) 00:30:53.240 6791.913 - 6821.702: 99.9908% ( 1) 00:30:53.240 6821.702 - 6851.491: 99.9926% ( 1) 00:30:53.240 6851.491 - 6881.280: 99.9945% ( 1) 00:30:53.240 6911.069 - 6940.858: 99.9963% ( 1) 00:30:53.240 6940.858 - 6970.647: 99.9982% ( 1) 00:30:53.240 7000.436 - 7030.225: 100.0000% ( 1) 00:30:53.240 00:30:53.240 10:44:46 -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:30:54.617 Initializing NVMe Controllers 00:30:54.617 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:30:54.617 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:30:54.617 Initialization complete. Launching workers. 
00:30:54.617 ========================================================
00:30:54.617 Latency(us)
00:30:54.617 Device Information : IOPS MiB/s Average min max
00:30:54.617 PCIE (0000:00:06.0) NSID 1 from core 0: 57387.45 672.51 2233.61 959.28 11539.84
00:30:54.617 ========================================================
00:30:54.617 Total : 57387.45 672.51 2233.61 959.28 11539.84
00:30:54.617
00:30:54.617 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0:
00:30:54.617 =================================================================================
00:30:54.617 1.00000% : 1541.585us
00:30:54.617 10.00000% : 1854.371us
00:30:54.617 25.00000% : 1995.869us
00:30:54.617 50.00000% : 2174.604us
00:30:54.617 75.00000% : 2412.916us
00:30:54.617 90.00000% : 2695.913us
00:30:54.617 95.00000% : 2949.120us
00:30:54.617 98.00000% : 3157.644us
00:30:54.617 99.00000% : 3276.800us
00:30:54.617 99.50000% : 3410.851us
00:30:54.617 99.90000% : 4349.207us
00:30:54.617 99.99000% : 11439.011us
00:30:54.617 99.99900% : 11558.167us
00:30:54.617 99.99990% : 11558.167us
00:30:54.617 99.99999% : 11558.167us
00:30:54.617
00:30:54.617 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0:
00:30:54.617 ==============================================================================
00:30:54.617 Range in us Cumulative IO count
00:30:54.617 [... full per-bucket histogram elided: buckets from 953.251 us to 11558.167 us; the shape matches the summary percentiles above (p50 ~2175 us, p99.9 ~4349 us, max tail ~11.5 ms) ...]
00:30:54.618 ************************************
00:30:54.618 END TEST nvme_perf
00:30:54.618 ************************************
00:30:54.618 10:44:48 -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:30:54.618
00:30:54.618 real 0m2.680s
00:30:54.618 user 0m2.260s
00:30:54.618 sys 0m0.257s
00:30:54.618 10:44:48 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:30:54.618 10:44:48 -- common/autotest_common.sh@10 -- # set +x
00:30:54.618 10:44:48 -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:30:54.618 10:44:48 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']'
00:30:54.618 10:44:48 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:30:54.618 10:44:48 -- common/autotest_common.sh@10 -- # set +x
00:30:54.618 ************************************
00:30:54.618 START TEST nvme_hello_world
00:30:54.618 ************************************
00:30:54.618 10:44:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:30:54.877 Initializing NVMe Controllers
00:30:54.877 Attached to 0000:00:06.0
00:30:54.877 Namespace ID: 1 size: 5GB
00:30:54.877 Initialization complete.
00:30:54.877 INFO: using host memory buffer for IO
00:30:54.877 Hello world!
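
Note: the latency summary and percentiles above come from the perf invocation traced at the start of this run. A minimal stand-alone sketch of the same call, assuming the repo layout used by this job (-L requests latency tracking; giving it twice, as -LL, is what produces the detailed histograms seen here):

  # QD-128 12KiB writes for 1 second against the attached controller,
  # with detailed latency histograms enabled; -i 0 joins shm group 0
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
      -q 128 -w write -o 12288 -t 1 -LL -i 0
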
00:30:54.877 ************************************
00:30:54.877 END TEST nvme_hello_world
00:30:54.877 ************************************
00:30:54.877
00:30:54.877 real 0m0.308s
00:30:54.877 user 0m0.113s
00:30:54.877 sys 0m0.121s
00:30:54.877 10:44:48 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:30:54.877 10:44:48 -- common/autotest_common.sh@10 -- # set +x
00:30:54.877 10:44:48 -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:30:54.877 10:44:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:30:54.877 10:44:48 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:30:54.877 10:44:48 -- common/autotest_common.sh@10 -- # set +x
00:30:54.877 ************************************
00:30:54.877 START TEST nvme_sgl
00:30:54.877 ************************************
00:30:54.877 10:44:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:30:55.136 0000:00:06.0: build_io_request_0 Invalid IO length parameter
00:30:55.136 0000:00:06.0: build_io_request_1 Invalid IO length parameter
00:30:55.136 0000:00:06.0: build_io_request_3 Invalid IO length parameter
00:30:55.136 0000:00:06.0: build_io_request_8 Invalid IO length parameter
00:30:55.136 0000:00:06.0: build_io_request_9 Invalid IO length parameter
00:30:55.136 0000:00:06.0: build_io_request_11 Invalid IO length parameter
00:30:55.136 NVMe Readv/Writev Request test
00:30:55.136 Attached to 0000:00:06.0
00:30:55.136 0000:00:06.0: build_io_request_2 test passed
00:30:55.136 0000:00:06.0: build_io_request_4 test passed
00:30:55.136 0000:00:06.0: build_io_request_5 test passed
00:30:55.136 0000:00:06.0: build_io_request_6 test passed
00:30:55.136 0000:00:06.0: build_io_request_7 test passed
00:30:55.136 0000:00:06.0: build_io_request_10 test passed
00:30:55.136 Cleaning up...
00:30:55.136 ************************************
00:30:55.136 END TEST nvme_sgl
00:30:55.136 ************************************
00:30:55.136
00:30:55.136 real 0m0.377s
00:30:55.136 user 0m0.186s
00:30:55.136 sys 0m0.118s
00:30:55.136 10:44:49 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:30:55.136 10:44:49 -- common/autotest_common.sh@10 -- # set +x
00:30:55.395 10:44:49 -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:30:55.395 10:44:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:30:55.395 10:44:49 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:30:55.395 10:44:49 -- common/autotest_common.sh@10 -- # set +x
00:30:55.395 ************************************
00:30:55.395 START TEST nvme_e2edp
00:30:55.395 ************************************
00:30:55.395 10:44:49 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:30:55.654 NVMe Write/Read with End-to-End data protection test
00:30:55.654 Attached to 0000:00:06.0
00:30:55.654 Cleaning up...
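
Note: every START/END pair in this section is emitted by the run_test helper from test/common/autotest_common.sh, which times one test command and prints the banner plus the real/user/sys lines. A simplified reconstruction of the wrapper, inferred from the output above (the real helper also manages xtrace state):

  run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"            # produces the real/user/sys lines seen in the log
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
  }
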
00:30:55.654 ************************************
00:30:55.654 END TEST nvme_e2edp
00:30:55.654 ************************************
00:30:55.654
00:30:55.654 real 0m0.287s
00:30:55.654 user 0m0.123s
00:30:55.654 sys 0m0.098s
00:30:55.654 10:44:49 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:30:55.654 10:44:49 -- common/autotest_common.sh@10 -- # set +x
00:30:55.654 10:44:49 -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:30:55.654 10:44:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:30:55.654 10:44:49 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:30:55.654 10:44:49 -- common/autotest_common.sh@10 -- # set +x
00:30:55.654 ************************************
00:30:55.654 START TEST nvme_reserve
00:30:55.654 ************************************
00:30:55.654 10:44:49 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:30:55.913 =====================================================
00:30:55.913 NVMe Controller at PCI bus 0, device 6, function 0
00:30:55.913 =====================================================
00:30:55.913 Reservations: Not Supported
00:30:55.913 Reservation test passed
00:30:55.913 ************************************
00:30:55.913 END TEST nvme_reserve
00:30:55.913 ************************************
00:30:55.913
00:30:55.913 real 0m0.281s
00:30:55.913 user 0m0.074s
00:30:55.913 sys 0m0.144s
00:30:55.913 10:44:49 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:30:55.913 10:44:49 -- common/autotest_common.sh@10 -- # set +x
00:30:55.913 10:44:49 -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:30:55.913 10:44:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:30:55.913 10:44:49 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:30:55.913 10:44:49 -- common/autotest_common.sh@10 -- # set +x
00:30:55.913 ************************************
00:30:55.913 START TEST nvme_err_injection
00:30:55.913 ************************************
00:30:55.913 10:44:49 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:30:56.172 NVMe Error Injection test
00:30:56.172 Attached to 0000:00:06.0
00:30:56.172 0000:00:06.0: get features failed as expected
00:30:56.172 0000:00:06.0: get features successfully as expected
00:30:56.172 0000:00:06.0: read failed as expected
00:30:56.172 0000:00:06.0: read successfully as expected
00:30:56.172 Cleaning up...
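
Note: the err_injection binary above drives error injection inside a single process; the bdev_nvme_reset_stuck_adm_cmd test at the end of this log does the same against a running spdk_tgt over RPC. A sketch of that RPC call, with flag values copied from the trace further down (opc 10 is the Get Features admin opcode; --sct/--sc select the status fields of the injected completion):

  # inject one failure into the next Get Features admin command, holding it
  # for up to 15 s without submitting it to the drive (--do_not_submit)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_add_error_injection \
      -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 \
      --err-count 1 --sct 0 --sc 1 --do_not_submit
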
00:30:56.172 ************************************
00:30:56.172 END TEST nvme_err_injection
00:30:56.172 ************************************
00:30:56.172
00:30:56.172 real 0m0.311s
00:30:56.172 user 0m0.123s
00:30:56.172 sys 0m0.117s
00:30:56.172 10:44:50 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:30:56.172 10:44:50 -- common/autotest_common.sh@10 -- # set +x
00:30:56.430 10:44:50 -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:30:56.430 10:44:50 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']'
00:30:56.430 10:44:50 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:30:56.430 10:44:50 -- common/autotest_common.sh@10 -- # set +x
00:30:56.430 ************************************
00:30:56.430 START TEST nvme_overhead
00:30:56.430 ************************************
00:30:56.430 10:44:50 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:30:57.809 Initializing NVMe Controllers
00:30:57.809 Attached to 0000:00:06.0
00:30:57.809 Initialization complete. Launching workers.
00:30:57.809 submit (in ns) avg, min, max = 13982.4, 10280.9, 96670.9
00:30:57.809 complete (in ns) avg, min, max = 10300.0, 7253.6, 98293.6
00:30:57.809
00:30:57.809 Submit histogram
00:30:57.809 ================
00:30:57.809 Range in us Cumulative Count
00:30:57.809 [... full submit histogram elided: cumulative count rises from 0.01% at ~10.2 us to ~81% by ~12.9 us, climbs slowly to ~85% through the mid-20s, jumps to ~99% between ~25.4 us and ~28 us, and tails off to 100.0000% at 96.815 us ...]
00:30:57.810
00:30:57.810 Complete histogram
00:30:57.810 ==================
00:30:57.810 Range in us Cumulative Count
00:30:57.810 [... full complete histogram elided: from ~7.2 us the cumulative count reaches ~75% by ~8.4 us and ~84% by ~9.4 us, stays nearly flat through the teens, rises from ~86% to ~98% between ~21.6 us and ~24 us, and tails off to 100.0000% at 98.676 us ...]
00:30:57.811
00:30:57.811 ************************************
00:30:57.811 END TEST nvme_overhead
00:30:57.811 ************************************
00:30:57.811
00:30:57.811 real 0m1.311s
00:30:57.811 user 0m1.127s
00:30:57.811 sys 0m0.113s
00:30:57.811 10:44:51 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:30:57.811 10:44:51 -- common/autotest_common.sh@10 -- # set +x
00:30:57.811 10:44:51 -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:30:57.811 10:44:51 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']'
00:30:57.811 10:44:51 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:30:57.811 10:44:51 -- common/autotest_common.sh@10 -- # set +x
00:30:57.811 ************************************
00:30:57.811 START TEST nvme_arbitration
00:30:57.811 ************************************
00:30:57.811 10:44:51 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:31:01.099 Initializing NVMe Controllers
00:31:01.099 Attached to 0000:00:06.0
00:31:01.099 Associating QEMU NVMe Ctrl (12340 ) with lcore 0
00:31:01.099 Associating QEMU NVMe Ctrl (12340 ) with lcore 1
00:31:01.099 Associating QEMU NVMe Ctrl (12340 ) with lcore 2
00:31:01.099 Associating QEMU NVMe Ctrl (12340 ) with lcore 3
00:31:01.099 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration:
00:31:01.099 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0
00:31:01.099 Initialization complete. Launching workers.
00:31:01.099 Starting thread on core 1 with urgent priority queue
00:31:01.099 Starting thread on core 2 with urgent priority queue
00:31:01.099 Starting thread on core 3 with urgent priority queue
00:31:01.099 Starting thread on core 0 with urgent priority queue
00:31:01.099 QEMU NVMe Ctrl (12340 ) core 0: 1813.33 IO/s 55.15 secs/100000 ios
00:31:01.099 QEMU NVMe Ctrl (12340 ) core 1: 960.00 IO/s 104.17 secs/100000 ios
00:31:01.099 QEMU NVMe Ctrl (12340 ) core 2: 1002.67 IO/s 99.73 secs/100000 ios
00:31:01.099 QEMU NVMe Ctrl (12340 ) core 3: 448.00 IO/s 223.21 secs/100000 ios
00:31:01.099 ========================================================
00:31:01.099
00:31:01.099 ************************************
00:31:01.099 END TEST nvme_arbitration
00:31:01.099 ************************************
00:31:01.099
00:31:01.099 real 0m3.453s
00:31:01.099 user 0m9.430s
00:31:01.099 sys 0m0.164s
00:31:01.099 10:44:54 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:31:01.099 10:44:54 -- common/autotest_common.sh@10 -- # set +x
00:31:01.099 10:44:54 -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log
00:31:01.099 10:44:54 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']'
00:31:01.099 10:44:54 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:31:01.099 10:44:54 -- common/autotest_common.sh@10 -- # set +x
00:31:01.099 ************************************
00:31:01.099 START TEST nvme_single_aen
00:31:01.099 ************************************
00:31:01.099 10:44:54 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log
00:31:01.358 [2024-07-12 10:44:55.024851] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:31:01.359 [2024-07-12 10:44:55.025117] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:31:01.359 [2024-07-12 10:44:55.244229] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller
00:31:01.618 Asynchronous Event Request test
00:31:01.618 Attached to 0000:00:06.0
00:31:01.618 Reset controller to setup AER completions for this process
00:31:01.618 Registering asynchronous event callbacks...
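
Note: the nvme_single_aen run starting above (its output continues below) and the nvme_multi_aen run further down both use the same aer test binary; the two invocations, as traced in this log, are sketched here (reading of -T and -m is inferred from the banner output: -T drives the temperature-threshold AER exercise, -m forks a child that registers its own callbacks, and -L log enables the 'log' trace flag):

  # single-process AER test in shm group 0
  /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log
  # multi-process variant: parent and [Child] both handle AERs
  /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log
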
00:31:01.618 Getting orig temperature thresholds of all controllers
00:31:01.618 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:31:01.618 Setting all controllers temperature threshold low to trigger AER
00:31:01.618 Waiting for all controllers temperature threshold to be set lower
00:31:01.618 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:31:01.618 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0
00:31:01.618 Waiting for all controllers to trigger AER and reset threshold
00:31:01.618 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius)
00:31:01.618 Cleaning up...
00:31:01.618 ************************************
00:31:01.618 END TEST nvme_single_aen
00:31:01.618 ************************************
00:31:01.618
00:31:01.618 real 0m0.317s
00:31:01.618 user 0m0.084s
00:31:01.618 sys 0m0.144s
00:31:01.618 10:44:55 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:31:01.618 10:44:55 -- common/autotest_common.sh@10 -- # set +x
00:31:01.618 10:44:55 -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers
00:31:01.618 10:44:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:31:01.618 10:44:55 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:31:01.618 10:44:55 -- common/autotest_common.sh@10 -- # set +x
00:31:01.618 ************************************
00:31:01.618 START TEST nvme_doorbell_aers
00:31:01.618 ************************************
00:31:01.618 10:44:55 -- common/autotest_common.sh@1104 -- # nvme_doorbell_aers
00:31:01.618 10:44:55 -- nvme/nvme.sh@70 -- # bdfs=()
00:31:01.618 10:44:55 -- nvme/nvme.sh@70 -- # local bdfs bdf
00:31:01.618 10:44:55 -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs))
00:31:01.618 10:44:55 -- nvme/nvme.sh@71 -- # get_nvme_bdfs
00:31:01.618 10:44:55 -- common/autotest_common.sh@1498 -- # bdfs=()
00:31:01.618 10:44:55 -- common/autotest_common.sh@1498 -- # local bdfs
00:31:01.618 10:44:55 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:31:01.618 10:44:55 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:31:01.618 10:44:55 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:31:01.618 10:44:55 -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:31:01.618 10:44:55 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0
00:31:01.618 10:44:55 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:31:01.618 10:44:55 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:06.0'
00:31:01.876 [2024-07-12 10:44:55.686607] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 143891) is not found. Dropping the request.
00:31:11.846 Executing: test_write_invalid_db
00:31:11.846 Waiting for AER completion...
00:31:11.847 Failure: test_write_invalid_db
00:31:11.847
00:31:11.847 Executing: test_invalid_db_write_overflow_sq
00:31:11.847 Waiting for AER completion...
00:31:11.847 Failure: test_invalid_db_write_overflow_sq
00:31:11.847
00:31:11.847 Executing: test_invalid_db_write_overflow_cq
00:31:11.847 Waiting for AER completion...
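
Note: the doorbell test's remaining result lines continue below. The wrapper traced above discovers controllers by asking gen_nvme.sh for PCI addresses; the same one-liner works on its own, exactly as shown in the trace (requires jq):

  # enumerate NVMe BDFs the way the harness does
  bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
  printf '%s\n' "${bdfs[@]}"    # prints 0000:00:06.0 on this VM
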
00:31:11.847 Failure: test_invalid_db_write_overflow_cq
00:31:11.847
00:31:11.847 ************************************
00:31:11.847 END TEST nvme_doorbell_aers
00:31:11.847 ************************************
00:31:11.847
00:31:11.847 real 0m10.112s
00:31:11.847 user 0m8.469s
00:31:11.847 sys 0m1.589s
00:31:11.847 10:45:05 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:31:11.847 10:45:05 -- common/autotest_common.sh@10 -- # set +x
00:31:11.847 10:45:05 -- nvme/nvme.sh@97 -- # uname
00:31:11.847 10:45:05 -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']'
00:31:11.847 10:45:05 -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log
00:31:11.847 10:45:05 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']'
00:31:11.847 10:45:05 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:31:11.847 10:45:05 -- common/autotest_common.sh@10 -- # set +x
00:31:11.847 ************************************
00:31:11.847 START TEST nvme_multi_aen
00:31:11.847 ************************************
00:31:11.847 10:45:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log
00:31:11.847 [2024-07-12 10:45:05.547630] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:31:11.847 [2024-07-12 10:45:05.548002] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:31:11.847 [2024-07-12 10:45:05.742049] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller
00:31:11.847 [2024-07-12 10:45:05.742262] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 143891) is not found. Dropping the request.
00:31:11.847 [2024-07-12 10:45:05.742496] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 143891) is not found. Dropping the request.
00:31:11.847 [2024-07-12 10:45:05.742629] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 143891) is not found. Dropping the request.
00:31:11.847 [2024-07-12 10:45:05.749499] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:31:11.847 [2024-07-12 10:45:05.749941] [ DPDK EAL parameters: aer -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:31:11.847 Child process pid: 144107
00:31:12.414 [Child] Asynchronous Event Request test
00:31:12.414 [Child] Attached to 0000:00:06.0
00:31:12.414 [Child] Registering asynchronous event callbacks...
00:31:12.414 [Child] Getting orig temperature thresholds of all controllers
00:31:12.414 [Child] 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:31:12.414 [Child] Waiting for all controllers to trigger AER and reset threshold
00:31:12.414 [Child] 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:31:12.414 [Child] 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius)
00:31:12.414 [Child] Cleaning up...
00:31:12.415 Asynchronous Event Request test
00:31:12.415 Attached to 0000:00:06.0
00:31:12.415 Reset controller to setup AER completions for this process
00:31:12.415 Registering asynchronous event callbacks...
00:31:12.415 Getting orig temperature thresholds of all controllers
00:31:12.415 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:31:12.415 Setting all controllers temperature threshold low to trigger AER
00:31:12.415 Waiting for all controllers temperature threshold to be set lower
00:31:12.415 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:31:12.415 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0
00:31:12.415 Waiting for all controllers to trigger AER and reset threshold
00:31:12.415 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius)
00:31:12.415 Cleaning up...
00:31:12.415 ************************************
00:31:12.415 END TEST nvme_multi_aen
00:31:12.415 ************************************
00:31:12.415
00:31:12.415 real 0m0.646s
00:31:12.415 user 0m0.245s
00:31:12.415 sys 0m0.238s
00:31:12.415 10:45:06 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:31:12.415 10:45:06 -- common/autotest_common.sh@10 -- # set +x
00:31:12.415 10:45:06 -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000
00:31:12.415 10:45:06 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']'
00:31:12.415 10:45:06 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:31:12.415 10:45:06 -- common/autotest_common.sh@10 -- # set +x
00:31:12.415 ************************************
00:31:12.415 START TEST nvme_startup
00:31:12.415 ************************************
00:31:12.415 10:45:06 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000
00:31:12.674 Initializing NVMe Controllers
00:31:12.674 Attached to 0000:00:06.0
00:31:12.674 Initialization complete.
00:31:12.674 Time used:210071.922 (us).
00:31:12.674 ************************************
00:31:12.674 END TEST nvme_startup
00:31:12.674 ************************************
00:31:12.674
00:31:12.674 real 0m0.304s
00:31:12.674 user 0m0.098s
00:31:12.674 sys 0m0.124s
00:31:12.674 10:45:06 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:31:12.674 10:45:06 -- common/autotest_common.sh@10 -- # set +x
00:31:12.674 10:45:06 -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary
00:31:12.674 10:45:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:31:12.674 10:45:06 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:31:12.674 10:45:06 -- common/autotest_common.sh@10 -- # set +x
00:31:12.674 ************************************
00:31:12.674 START TEST nvme_multi_secondary
00:31:12.674 ************************************
00:31:12.674 10:45:06 -- common/autotest_common.sh@1104 -- # nvme_multi_secondary
00:31:12.674 10:45:06 -- nvme/nvme.sh@52 -- # pid0=144173
00:31:12.674 10:45:06 -- nvme/nvme.sh@54 -- # pid1=144174
00:31:12.674 10:45:06 -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4
00:31:12.674 10:45:06 -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1
00:31:12.674 10:45:06 -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2
00:31:16.858 Initializing NVMe Controllers
00:31:16.858 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010]
00:31:16.858 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2
00:31:16.858 Initialization complete. Launching workers.
00:31:16.858 ========================================================
00:31:16.858 Latency(us)
00:31:16.858 Device Information : IOPS MiB/s Average min max
00:31:16.858 PCIE (0000:00:06.0) NSID 1 from core 2: 13837.98 54.05 1155.98 145.63 24487.62
00:31:16.858 ========================================================
00:31:16.858 Total : 13837.98 54.05 1155.98 145.63 24487.62
00:31:16.858
00:31:16.858 10:45:09 -- nvme/nvme.sh@56 -- # wait 144173
00:31:16.858 Initializing NVMe Controllers
00:31:16.858 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010]
00:31:16.858 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1
00:31:16.858 Initialization complete. Launching workers.
00:31:16.858 ========================================================
00:31:16.858 Latency(us)
00:31:16.858 Device Information : IOPS MiB/s Average min max
00:31:16.858 PCIE (0000:00:06.0) NSID 1 from core 1: 34084.99 133.14 469.07 113.28 3773.16
00:31:16.858 ========================================================
00:31:16.858 Total : 34084.99 133.14 469.07 113.28 3773.16
00:31:16.858
00:31:18.238 Initializing NVMe Controllers
00:31:18.238 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010]
00:31:18.238 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0
00:31:18.238 Initialization complete. Launching workers.
00:31:18.238 ========================================================
00:31:18.238 Latency(us)
00:31:18.238 Device Information : IOPS MiB/s Average min max
00:31:18.238 PCIE (0000:00:06.0) NSID 1 from core 0: 38356.78 149.83 416.81 101.81 1658.88
00:31:18.238 ========================================================
00:31:18.238 Total : 38356.78 149.83 416.81 101.81 1658.88
00:31:18.238
00:31:18.238 10:45:12 -- nvme/nvme.sh@57 -- # wait 144174
00:31:18.238 10:45:12 -- nvme/nvme.sh@61 -- # pid0=144246
00:31:18.238 10:45:12 -- nvme/nvme.sh@63 -- # pid1=144247
00:31:18.238 10:45:12 -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4
00:31:18.238 10:45:12 -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1
00:31:18.238 10:45:12 -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2
00:31:22.501 Initializing NVMe Controllers
00:31:22.501 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010]
00:31:22.501 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1
00:31:22.501 Initialization complete. Launching workers.
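
Note: the per-core latency tables above and below come from nvme_multi_secondary, which runs several copies of spdk_nvme_perf against the same controller, pinned to different cores with -c masks and sharing the controller through shm group -i 0 (the first process up becomes the DPDK primary, the rest attach as secondaries). A sketch of the pattern, with flag values taken from the traces above:

  # primary on core 0 (-c 0x1, runs longest), secondaries on cores 1 and 2
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 &
  wait    # the harness waits on each pid, as the 'wait 144173/144174' traces show
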
00:31:22.501 ========================================================
00:31:22.501 Latency(us)
00:31:22.501 Device Information : IOPS MiB/s Average min max
00:31:22.501 PCIE (0000:00:06.0) NSID 1 from core 0: 31692.32 123.80 504.49 136.23 2757.74
00:31:22.501 ========================================================
00:31:22.501 Total : 31692.32 123.80 504.49 136.23 2757.74
00:31:22.501
00:31:23.875 Initializing NVMe Controllers
00:31:23.875 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010]
00:31:23.875 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2
00:31:23.875 Initialization complete. Launching workers.
00:31:23.875 ========================================================
00:31:23.875 Latency(us)
00:31:23.875 Device Information : IOPS MiB/s Average min max
00:31:23.875 PCIE (0000:00:06.0) NSID 1 from core 2: 16727.59 65.34 955.50 138.30 33025.89
00:31:23.875 ========================================================
00:31:23.875 Total : 16727.59 65.34 955.50 138.30 33025.89
00:31:23.875
00:31:23.875 ************************************
00:31:23.875 END TEST nvme_multi_secondary
00:31:23.875 ************************************
00:31:23.875 10:45:17 -- nvme/nvme.sh@65 -- # wait 144246
00:31:23.875 10:45:17 -- nvme/nvme.sh@66 -- # wait 144247
00:31:23.875
00:31:23.875 real 0m10.953s
00:31:23.875 user 0m18.682s
00:31:23.875 sys 0m0.805s
00:31:23.875 10:45:17 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:31:23.875 10:45:17 -- common/autotest_common.sh@10 -- # set +x
00:31:23.875 10:45:17 -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT
00:31:23.875 10:45:17 -- nvme/nvme.sh@102 -- # kill_stub
00:31:23.875 10:45:17 -- common/autotest_common.sh@1065 -- # [[ -e /proc/143429 ]]
00:31:23.875 10:45:17 -- common/autotest_common.sh@1066 -- # kill 143429
00:31:23.875 10:45:17 -- common/autotest_common.sh@1067 -- # wait 143429
00:31:24.441 [2024-07-12 10:45:18.086556] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 144106) is not found. Dropping the request.
[2024-07-12 10:45:18.086910] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 144106) is not found. Dropping the request.
[2024-07-12 10:45:18.087150] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 144106) is not found. Dropping the request.
[2024-07-12 10:45:18.087400] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 144106) is not found. Dropping the request.
00:31:24.700 10:45:18 -- common/autotest_common.sh@1069 -- # rm -f /var/run/spdk_stub0
00:31:24.700 10:45:18 -- common/autotest_common.sh@1073 -- # echo 2
00:31:24.700 10:45:18 -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh
00:31:24.700 10:45:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:31:24.700 10:45:18 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:31:24.700 10:45:18 -- common/autotest_common.sh@10 -- # set +x
00:31:24.700 ************************************
00:31:24.700 START TEST bdev_nvme_reset_stuck_adm_cmd
00:31:24.700 ************************************
00:31:24.700 10:45:18 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh
00:31:24.700 * Looking for test storage...
00:31:24.700 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:31:24.700 10:45:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:31:24.700 10:45:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:31:24.700 10:45:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:31:24.700 10:45:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:31:24.700 10:45:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:31:24.700 10:45:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:31:24.700 10:45:18 -- common/autotest_common.sh@1509 -- # bdfs=() 00:31:24.700 10:45:18 -- common/autotest_common.sh@1509 -- # local bdfs 00:31:24.700 10:45:18 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:31:24.700 10:45:18 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:31:24.700 10:45:18 -- common/autotest_common.sh@1498 -- # bdfs=() 00:31:24.700 10:45:18 -- common/autotest_common.sh@1498 -- # local bdfs 00:31:24.700 10:45:18 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:24.700 10:45:18 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:31:24.700 10:45:18 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:31:24.959 10:45:18 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:31:24.959 10:45:18 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:31:24.959 10:45:18 -- common/autotest_common.sh@1512 -- # echo 0000:00:06.0 00:31:24.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:24.959 10:45:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:06.0 00:31:24.959 10:45:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:06.0 ']' 00:31:24.959 10:45:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=144424 00:31:24.959 10:45:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:31:24.959 10:45:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:31:24.959 10:45:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 144424 00:31:24.959 10:45:18 -- common/autotest_common.sh@819 -- # '[' -z 144424 ']' 00:31:24.959 10:45:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:24.959 10:45:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:24.959 10:45:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:24.959 10:45:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:24.959 10:45:18 -- common/autotest_common.sh@10 -- # set +x 00:31:24.959 [2024-07-12 10:45:18.744351] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
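[Editor's note] The get_first_nvme_bdf step above resolves the controller's PCI address by piping gen_nvme.sh output through jq. Only the .config[].params.traddr path is attested in the trace; the surrounding JSON shape sketched here, the "method" key in particular, is an assumption for illustration:

echo '{"config":[{"method":"bdev_nvme_attach_controller","params":{"trtype":"PCIe","traddr":"0000:00:06.0"}}]}' |
  jq -r '.config[].params.traddr'
# prints: 0000:00:06.0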
00:31:24.959 [2024-07-12 10:45:18.744762] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144424 ] 00:31:25.218 [2024-07-12 10:45:18.950969] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:25.476 [2024-07-12 10:45:19.188218] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:25.476 [2024-07-12 10:45:19.188799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:25.476 [2024-07-12 10:45:19.188924] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:25.476 [2024-07-12 10:45:19.188998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:25.476 [2024-07-12 10:45:19.188995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:26.852 10:45:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:26.852 10:45:20 -- common/autotest_common.sh@852 -- # return 0 00:31:26.852 10:45:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:06.0 00:31:26.852 10:45:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:26.852 10:45:20 -- common/autotest_common.sh@10 -- # set +x 00:31:26.852 nvme0n1 00:31:26.852 10:45:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:26.852 10:45:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:31:26.852 10:45:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_ECQW6.txt 00:31:26.852 10:45:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:31:26.852 10:45:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:26.852 10:45:20 -- common/autotest_common.sh@10 -- # set +x 00:31:26.852 true 00:31:26.852 10:45:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:26.852 10:45:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:31:26.852 10:45:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1720781120 00:31:26.852 10:45:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=144452 00:31:26.852 10:45:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:31:26.852 10:45:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:31:26.852 10:45:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:31:28.754 10:45:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:31:28.754 10:45:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:28.754 10:45:22 -- common/autotest_common.sh@10 -- # set +x 00:31:28.754 [2024-07-12 10:45:22.490664] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:31:28.754 [2024-07-12 10:45:22.491178] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:28.754 [2024-07-12 10:45:22.491435] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:31:28.754 [2024-07-12 10:45:22.491565] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:28.754 [2024-07-12 10:45:22.493615] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:31:28.754 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 144452 00:31:28.754 10:45:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:28.754 10:45:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 144452 00:31:28.754 10:45:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 144452 00:31:28.754 10:45:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:31:28.754 10:45:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:31:28.754 10:45:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:28.754 10:45:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:28.754 10:45:22 -- common/autotest_common.sh@10 -- # set +x 00:31:28.754 10:45:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:28.754 10:45:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:31:28.754 10:45:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_ECQW6.txt 00:31:28.754 10:45:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:31:28.754 10:45:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:31:28.754 10:45:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:31:28.754 10:45:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:31:28.754 10:45:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:31:28.754 10:45:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:31:28.754 10:45:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:31:28.754 10:45:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:31:28.754 10:45:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:31:28.754 10:45:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:31:28.754 10:45:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:31:28.754 10:45:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:31:28.754 10:45:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:31:28.754 10:45:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:31:28.754 10:45:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:31:28.754 10:45:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:31:28.754 10:45:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:31:28.754 10:45:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:31:28.754 10:45:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:31:28.754 10:45:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_ECQW6.txt 00:31:28.754 10:45:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 144424 00:31:28.754 10:45:22 -- common/autotest_common.sh@926 -- # '[' -z 144424 ']' 00:31:28.754 10:45:22 -- common/autotest_common.sh@930 -- # kill -0 144424 00:31:28.754 10:45:22 -- common/autotest_common.sh@931 -- # uname 00:31:28.754 
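[Editor's note] The two base64_decode_bits calls above pull the status fields out of the raw completion that jq extracted from the tmp file. A minimal stand-alone equivalent, assuming a little-endian host and a util-linux/BSD hexdump that accepts -s on piped input; bytes 14-15 of the 16-byte completion hold the status word (bit 0 phase tag, bits 1-8 SC, bits 9-11 SCT):

cpl=AAAAAAAAAAAAAAAAAAACAA==   # the .cpl value captured above
status=$(base64 -d <<< "$cpl" | hexdump -s 14 -n 2 -e '1/2 "%u"')
printf 'sct=0x%x sc=0x%x\n' $(( (status >> 9) & 0x7 )) $(( (status >> 1) & 0xff ))
# -> sct=0x0 sc=0x1: generic command status / Invalid Command Opcode,
#    i.e. exactly the --sct 0 --sc 1 error the test injected, which the
#    script then compares against nvme_status_sct/nvme_status_sc.
# The submitted command buffer decodes the same way: byte 0x00 is 0x0a
# (Get Features) and byte 0x28 (cdw10) is 0x07 (Number of Queues),
# matching the "GET FEATURES NUMBER OF QUEUES ... cdw10:00000007" trace.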
10:45:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:28.754 10:45:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 144424 00:31:28.754 killing process with pid 144424 00:31:28.754 10:45:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:28.754 10:45:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:28.754 10:45:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 144424' 00:31:28.754 10:45:22 -- common/autotest_common.sh@945 -- # kill 144424 00:31:28.754 10:45:22 -- common/autotest_common.sh@950 -- # wait 144424 00:31:31.283 10:45:24 -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:31:31.283 10:45:24 -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:31:31.283 00:31:31.283 real 0m6.078s 00:31:31.283 user 0m21.397s 00:31:31.283 sys 0m0.796s 00:31:31.283 ************************************ 00:31:31.283 END TEST bdev_nvme_reset_stuck_adm_cmd 00:31:31.283 ************************************ 00:31:31.283 10:45:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:31.283 10:45:24 -- common/autotest_common.sh@10 -- # set +x 00:31:31.283 10:45:24 -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:31:31.283 10:45:24 -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:31:31.283 10:45:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:31.283 10:45:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:31.283 10:45:24 -- common/autotest_common.sh@10 -- # set +x 00:31:31.283 ************************************ 00:31:31.283 START TEST nvme_fio 00:31:31.283 ************************************ 00:31:31.283 10:45:24 -- common/autotest_common.sh@1104 -- # nvme_fio_test 00:31:31.283 10:45:24 -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:31:31.283 10:45:24 -- nvme/nvme.sh@32 -- # ran_fio=false 00:31:31.283 10:45:24 -- nvme/nvme.sh@33 -- # bdfs=($(get_nvme_bdfs)) 00:31:31.283 10:45:24 -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:31:31.283 10:45:24 -- common/autotest_common.sh@1498 -- # bdfs=() 00:31:31.283 10:45:24 -- common/autotest_common.sh@1498 -- # local bdfs 00:31:31.283 10:45:24 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:31.283 10:45:24 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:31:31.283 10:45:24 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:31:31.283 10:45:24 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:31:31.283 10:45:24 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:31:31.283 10:45:24 -- nvme/nvme.sh@33 -- # local bdfs bdf 00:31:31.283 10:45:24 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:31:31.283 10:45:24 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:31:31.283 10:45:24 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:31:31.283 10:45:24 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:31:31.283 10:45:24 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:31:31.541 10:45:25 -- nvme/nvme.sh@41 -- # bs=4096 00:31:31.541 10:45:25 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:31:31.541 
10:45:25 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:31:31.541 10:45:25 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:31:31.541 10:45:25 -- common/autotest_common.sh@1318 -- # sanitizers=(libasan libclang_rt.asan) 00:31:31.541 10:45:25 -- common/autotest_common.sh@1318 -- # local sanitizers 00:31:31.541 10:45:25 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:31:31.541 10:45:25 -- common/autotest_common.sh@1320 -- # shift 00:31:31.541 10:45:25 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:31:31.541 10:45:25 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:31.541 10:45:25 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:31:31.541 10:45:25 -- common/autotest_common.sh@1324 -- # grep libasan 00:31:31.541 10:45:25 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:31.541 10:45:25 -- common/autotest_common.sh@1324 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.5 00:31:31.541 10:45:25 -- common/autotest_common.sh@1325 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.5 ]] 00:31:31.541 10:45:25 -- common/autotest_common.sh@1326 -- # break 00:31:31.541 10:45:25 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.5 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:31:31.541 10:45:25 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:31:31.541 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:31.541 fio-3.35 00:31:31.541 Starting 1 thread 00:31:34.827 00:31:34.827 test: (groupid=0, jobs=1): err= 0: pid=144621: Fri Jul 12 10:45:28 2024 00:31:34.827 read: IOPS=16.4k, BW=64.1MiB/s (67.2MB/s)(128MiB/2001msec) 00:31:34.827 slat (usec): min=3, max=138, avg= 5.63, stdev= 3.46 00:31:34.827 clat (usec): min=282, max=8693, avg=3871.73, stdev=310.13 00:31:34.827 lat (usec): min=302, max=8790, avg=3877.36, stdev=310.49 00:31:34.827 clat percentiles (usec): 00:31:34.827 | 1.00th=[ 3359], 5.00th=[ 3523], 10.00th=[ 3589], 20.00th=[ 3687], 00:31:34.827 | 30.00th=[ 3720], 40.00th=[ 3785], 50.00th=[ 3818], 60.00th=[ 3884], 00:31:34.827 | 70.00th=[ 3949], 80.00th=[ 4015], 90.00th=[ 4146], 95.00th=[ 4424], 00:31:34.827 | 99.00th=[ 4948], 99.50th=[ 5080], 99.90th=[ 5342], 99.95th=[ 7111], 00:31:34.827 | 99.99th=[ 8455] 00:31:34.827 bw ( KiB/s): min=62800, max=67216, per=99.94%, avg=65618.67, stdev=2448.27, samples=3 00:31:34.827 iops : min=15700, max=16804, avg=16404.67, stdev=612.07, samples=3 00:31:34.827 write: IOPS=16.4k, BW=64.2MiB/s (67.4MB/s)(129MiB/2001msec); 0 zone resets 00:31:34.827 slat (nsec): min=3925, max=83144, avg=5857.95, stdev=3487.06 00:31:34.827 clat (usec): min=248, max=8503, avg=3889.69, stdev=316.61 00:31:34.827 lat (usec): min=252, max=8533, avg=3895.55, stdev=316.95 00:31:34.827 clat percentiles (usec): 00:31:34.827 | 1.00th=[ 3392], 5.00th=[ 3556], 10.00th=[ 3621], 20.00th=[ 3687], 00:31:34.827 | 30.00th=[ 3752], 40.00th=[ 3785], 50.00th=[ 3851], 60.00th=[ 3884], 00:31:34.827 | 70.00th=[ 3949], 80.00th=[ 4047], 90.00th=[ 4178], 95.00th=[ 4424], 00:31:34.827 | 99.00th=[ 5014], 99.50th=[ 5080], 99.90th=[ 5997], 99.95th=[ 7308], 
00:31:34.827 | 99.99th=[ 8291] 00:31:34.827 bw ( KiB/s): min=63168, max=66816, per=99.48%, avg=65437.33, stdev=1980.39, samples=3 00:31:34.827 iops : min=15792, max=16704, avg=16359.33, stdev=495.10, samples=3 00:31:34.827 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.02% 00:31:34.827 lat (msec) : 2=0.05%, 4=77.13%, 10=22.78% 00:31:34.827 cpu : usr=99.80%, sys=0.15%, ctx=15, majf=0, minf=37 00:31:34.827 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:31:34.827 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.827 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:34.827 issued rwts: total=32845,32906,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:34.827 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:34.827 00:31:34.827 Run status group 0 (all jobs): 00:31:34.827 READ: bw=64.1MiB/s (67.2MB/s), 64.1MiB/s-64.1MiB/s (67.2MB/s-67.2MB/s), io=128MiB (135MB), run=2001-2001msec 00:31:34.827 WRITE: bw=64.2MiB/s (67.4MB/s), 64.2MiB/s-64.2MiB/s (67.4MB/s-67.4MB/s), io=129MiB (135MB), run=2001-2001msec 00:31:35.086 ----------------------------------------------------- 00:31:35.086 Suppressions used: 00:31:35.086 count bytes template 00:31:35.086 1 32 /usr/src/fio/parse.c 00:31:35.086 ----------------------------------------------------- 00:31:35.086 00:31:35.086 ************************************ 00:31:35.086 END TEST nvme_fio 00:31:35.086 ************************************ 00:31:35.086 10:45:28 -- nvme/nvme.sh@44 -- # ran_fio=true 00:31:35.086 10:45:28 -- nvme/nvme.sh@46 -- # true 00:31:35.086 00:31:35.086 real 0m4.266s 00:31:35.086 user 0m3.502s 00:31:35.086 sys 0m0.431s 00:31:35.086 10:45:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:35.086 10:45:28 -- common/autotest_common.sh@10 -- # set +x 00:31:35.086 ************************************ 00:31:35.086 END TEST nvme 00:31:35.086 ************************************ 00:31:35.086 00:31:35.086 real 0m48.016s 00:31:35.086 user 2m7.032s 00:31:35.086 sys 0m8.604s 00:31:35.086 10:45:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:35.086 10:45:28 -- common/autotest_common.sh@10 -- # set +x 00:31:35.345 10:45:28 -- spdk/autotest.sh@223 -- # [[ 0 -eq 1 ]] 00:31:35.345 10:45:28 -- spdk/autotest.sh@227 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:31:35.345 10:45:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:35.345 10:45:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:35.345 10:45:28 -- common/autotest_common.sh@10 -- # set +x 00:31:35.345 ************************************ 00:31:35.345 START TEST nvme_scc 00:31:35.345 ************************************ 00:31:35.345 10:45:29 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:31:35.345 * Looking for test storage... 
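[Editor's note] The fio run above loads SPDK's NVMe fio plugin via LD_PRELOAD, with libasan preloaded first because this is an ASAN build and fio itself is not instrumented. A minimal job file in the same spirit; the filename syntax (dots instead of colons in the BDF), ioengine=spdk, bs=4096 (chosen after the Extended Data LBA probe above), and iodepth=128 are taken from the log, while the section names, ns=1, and runtime values are illustrative rather than copied from example_config.fio:

; spdk_nvme.fio: minimal job for the SPDK NVMe fio plugin
[global]
ioengine=spdk        ; engine registered by the LD_PRELOADed plugin
thread=1
direct=1
rw=randrw
bs=4096
time_based=1
runtime=10

[nvme0]
filename=trtype=PCIe traddr=0000.00.06.0 ns=1
iodepth=128

Run it the way the harness does:

LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.5 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' \
  /usr/src/fio/fio spdk_nvme.fio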
00:31:35.345 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:31:35.345 10:45:29 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:31:35.345 10:45:29 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:31:35.345 10:45:29 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:31:35.345 10:45:29 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:31:35.345 10:45:29 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:35.345 10:45:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:35.345 10:45:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:35.345 10:45:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:35.345 10:45:29 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:31:35.345 10:45:29 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:31:35.345 10:45:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:31:35.345 10:45:29 -- paths/export.sh@5 -- # export PATH 00:31:35.345 10:45:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:31:35.345 10:45:29 -- nvme/functions.sh@10 -- # ctrls=() 00:31:35.345 10:45:29 -- nvme/functions.sh@10 -- # declare -A ctrls 00:31:35.345 10:45:29 -- nvme/functions.sh@11 -- # nvmes=() 00:31:35.345 10:45:29 -- nvme/functions.sh@11 -- # declare -A nvmes 00:31:35.345 10:45:29 -- nvme/functions.sh@12 -- # bdfs=() 00:31:35.345 10:45:29 -- nvme/functions.sh@12 -- # declare -A bdfs 00:31:35.345 10:45:29 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:31:35.345 10:45:29 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:31:35.345 10:45:29 -- nvme/functions.sh@14 -- # nvme_name= 00:31:35.345 10:45:29 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:35.345 10:45:29 -- nvme/nvme_scc.sh@12 -- # uname 00:31:35.345 10:45:29 -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:31:35.345 10:45:29 -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 
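[Editor's note] The nvme_scc test next rebinds the controller via setup.sh reset and then runs scan_nvme_ctrls from test/common/nvme/functions.sh; the long eval trace that follows is nvme_get feeding `nvme id-ctrl /dev/nvme0` (and later id-ns for each namespace) through an IFS=: read loop and storing every field in a bash associative array. A condensed sketch of that loop, assuming nvme-cli is installed; the real helper also handles shifting, flag words, and the per-namespace arrays:

declare -A nvme0
while IFS=: read -r reg val; do
    reg=$(tr -d '[:space:]' <<< "$reg")           # field name, padding stripped
    val=$(sed 's/^[[:space:]]*//' <<< "$val")     # value, leading blanks dropped
    [[ -n $reg && -n $val ]] && nvme0[$reg]=$val  # e.g. nvme0[vid]=0x1b36
done < <(nvme id-ctrl /dev/nvme0)
echo "${nvme0[sn]}"   # -> 12340 on this QEMU controller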
00:31:35.345 10:45:29 -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:31:35.604 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:31:35.604 Waiting for block devices as requested 00:31:35.604 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:31:35.865 10:45:29 -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:31:35.865 10:45:29 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:31:35.865 10:45:29 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:31:35.865 10:45:29 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:31:35.865 10:45:29 -- nvme/functions.sh@49 -- # pci=0000:00:06.0 00:31:35.865 10:45:29 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:06.0 00:31:35.865 10:45:29 -- scripts/common.sh@15 -- # local i 00:31:35.865 10:45:29 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:31:35.865 10:45:29 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:31:35.865 10:45:29 -- scripts/common.sh@24 -- # return 0 00:31:35.865 10:45:29 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:31:35.865 10:45:29 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:31:35.865 10:45:29 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:31:35.865 10:45:29 -- nvme/functions.sh@18 -- # shift 00:31:35.865 10:45:29 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:31:35.865 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.865 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.865 10:45:29 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:31:35.865 10:45:29 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:31:35.865 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.865 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.865 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:31:35.865 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:31:35.865 10:45:29 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:31:35.865 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.865 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.865 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:31:35.865 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:31:35.865 10:45:29 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:31:35.865 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.865 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.865 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:31:35.865 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12340 "' 00:31:35.865 10:45:29 -- nvme/functions.sh@23 -- # nvme0[sn]='12340 ' 00:31:35.865 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.865 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.865 10:45:29 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:31:35.865 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:31:35.865 10:45:29 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:31:35.865 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.865 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.865 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:31:35.865 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:31:35.865 10:45:29 -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:31:35.865 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.865 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.865 10:45:29 -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:31:35.865 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:31:35.865 10:45:29 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:31:35.865 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.865 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.865 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:31:35.865 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:31:35.865 10:45:29 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:31:35.865 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.865 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.865 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.865 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:31:35.865 10:45:29 -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:31:35.865 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.865 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.865 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:31:35.865 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:31:35.865 10:45:29 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:31:35.865 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.865 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.865 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.865 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:31:35.865 10:45:29 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:31:35.865 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.866 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.866 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:31:35.866 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:31:35.866 10:45:29 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:31:35.866 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.866 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.866 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.866 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:31:35.866 10:45:29 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:31:35.866 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.866 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.866 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.866 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:31:35.866 10:45:29 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:31:35.866 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.866 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.866 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:31:35.866 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:31:35.866 10:45:29 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:31:35.866 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.866 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.866 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:31:35.866 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:31:35.866 10:45:29 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:31:35.866 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.866 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.866 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.866 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:31:35.866 10:45:29 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:31:35.866 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 
00:31:35.866 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.866 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:31:35.866 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:31:35.866 10:45:29 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:31:35.866 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.866 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.866 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:31:35.866 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:31:35.866 10:45:29 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:31:35.866 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.866 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.866 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.866 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:31:35.866 10:45:29 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:31:35.866 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.866 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.866 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.866 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:31:35.866 10:45:29 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:31:35.866 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.866 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.866 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.866 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:31:35.866 10:45:29 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:31:35.866 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.866 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.866 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.866 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:31:35.866 10:45:29 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:31:35.866 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.866 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.866 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.866 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:31:35.866 10:45:29 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:31:35.866 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.866 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.866 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.866 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:31:35.866 10:45:29 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:31:35.866 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.866 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.866 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:31:35.866 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:31:35.866 10:45:29 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:31:35.866 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.866 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.866 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:31:35.866 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:31:35.866 10:45:29 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:31:35.866 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.866 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.866 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:31:35.866 10:45:29 -- 
nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:31:35.866 10:45:29 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:31:35.866 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.866 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.866 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:31:35.866 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:31:35.866 10:45:29 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:31:35.866 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.866 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.866 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:31:35.866 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:31:35.866 10:45:29 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:31:35.866 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.866 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.866 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.866 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:31:35.866 10:45:29 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:31:35.866 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.866 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.866 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.866 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:31:35.866 10:45:29 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:31:35.866 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.866 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.866 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.866 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:31:35.866 10:45:29 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:31:35.866 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.866 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.866 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.866 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:31:35.866 10:45:29 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:31:35.866 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.866 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.866 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:31:35.866 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:31:35.866 10:45:29 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:31:35.866 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.866 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.866 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:31:35.866 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:31:35.866 10:45:29 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:31:35.866 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.866 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.866 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.866 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:31:35.866 10:45:29 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:31:35.866 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.866 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.866 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.866 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:31:35.866 10:45:29 -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:31:35.866 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.866 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.866 10:45:29 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.866 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:31:35.866 10:45:29 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:31:35.866 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.866 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.866 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.866 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:31:35.866 10:45:29 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:31:35.866 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.866 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.866 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.866 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:31:35.866 10:45:29 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:31:35.866 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.866 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.866 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.866 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:31:35.866 10:45:29 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:31:35.866 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.866 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.866 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.866 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:31:35.866 10:45:29 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:31:35.866 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.866 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.866 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.866 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:31:35.866 10:45:29 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:31:35.866 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.866 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.866 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.866 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:31:35.866 10:45:29 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:31:35.866 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.866 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.866 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.866 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:31:35.866 10:45:29 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:31:35.866 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.866 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.866 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.866 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:31:35.866 10:45:29 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:31:35.866 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.866 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.866 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.867 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:31:35.867 10:45:29 -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:31:35.867 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.867 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.867 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.867 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:31:35.867 10:45:29 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:31:35.867 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.867 10:45:29 -- nvme/functions.sh@21 -- 
# read -r reg val 00:31:35.867 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.867 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:31:35.867 10:45:29 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:31:35.867 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.867 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.867 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.867 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:31:35.867 10:45:29 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:31:35.867 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.867 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.867 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.867 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:31:35.867 10:45:29 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:31:35.867 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.867 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.867 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.867 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:31:35.867 10:45:29 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:31:35.867 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.867 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.867 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.867 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:31:35.867 10:45:29 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:31:35.867 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.867 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.867 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.867 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:31:35.867 10:45:29 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:31:35.867 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.867 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.867 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.867 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:31:35.867 10:45:29 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:31:35.867 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.867 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.867 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.867 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:31:35.867 10:45:29 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:31:35.867 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.867 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.867 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.867 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:31:35.867 10:45:29 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:31:35.867 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.867 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.867 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.867 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:31:35.867 10:45:29 -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:31:35.867 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.867 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.867 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.867 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:31:35.867 10:45:29 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:31:35.867 
10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.867 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.867 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.867 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:31:35.867 10:45:29 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:31:35.867 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.867 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.867 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:31:35.867 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:31:35.867 10:45:29 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:31:35.867 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.867 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.867 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:31:35.867 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:31:35.867 10:45:29 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:31:35.867 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.867 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.867 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.867 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:31:35.867 10:45:29 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:31:35.867 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.867 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.867 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:31:35.867 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:31:35.867 10:45:29 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:31:35.867 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.867 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.867 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:31:35.867 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:31:35.867 10:45:29 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:31:35.867 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.867 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.867 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.867 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:31:35.867 10:45:29 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:31:35.867 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.867 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.867 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.867 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:31:35.867 10:45:29 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:31:35.867 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.867 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.867 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:31:35.867 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:31:35.867 10:45:29 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:31:35.867 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.867 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.867 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.867 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:31:35.867 10:45:29 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:31:35.867 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.867 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.867 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.867 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:31:35.867 
10:45:29 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:31:35.867 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.867 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.867 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.867 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:31:35.867 10:45:29 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:31:35.867 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.867 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.867 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.867 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:31:35.867 10:45:29 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:31:35.867 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.867 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.867 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.867 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:31:35.867 10:45:29 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:31:35.867 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.867 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.867 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:31:35.867 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:31:35.867 10:45:29 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:31:35.867 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.867 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.867 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:31:35.867 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:31:35.867 10:45:29 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:31:35.867 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.867 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.867 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.867 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:31:35.867 10:45:29 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:31:35.867 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.867 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.867 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.867 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:31:35.867 10:45:29 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:31:35.867 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.867 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.867 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.867 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:31:35.867 10:45:29 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:31:35.867 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.867 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.867 10:45:29 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:31:35.867 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12340"' 00:31:35.867 10:45:29 -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12340 00:31:35.867 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.867 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.867 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.867 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:31:35.867 10:45:29 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:31:35.867 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.867 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.867 
10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.867 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:31:35.867 10:45:29 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:31:35.867 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.867 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.867 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.867 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:31:35.867 10:45:29 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:31:35.867 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.867 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.867 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.867 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:31:35.867 10:45:29 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:31:35.868 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.868 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.868 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.868 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:31:35.868 10:45:29 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:31:35.868 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.868 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.868 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.868 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:31:35.868 10:45:29 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:31:35.868 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.868 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.868 10:45:29 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:31:35.868 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:31:35.868 10:45:29 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:31:35.868 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.868 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.868 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:31:35.868 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:31:35.868 10:45:29 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:31:35.868 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.868 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.868 10:45:29 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:31:35.868 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:31:35.868 10:45:29 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:31:35.868 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.868 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.868 10:45:29 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:31:35.868 10:45:29 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:31:35.868 10:45:29 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:31:35.868 10:45:29 -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:31:35.868 10:45:29 -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:31:35.868 10:45:29 -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:31:35.868 10:45:29 -- nvme/functions.sh@18 -- # shift 00:31:35.868 10:45:29 -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:31:35.868 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 
00:31:35.868 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.868 10:45:29 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:31:35.868 10:45:29 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:31:35.868 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.868 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.868 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:31:35.868 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:31:35.868 10:45:29 -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:31:35.868 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.868 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.868 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:31:35.868 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:31:35.868 10:45:29 -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:31:35.868 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.868 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.868 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:31:35.868 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:31:35.868 10:45:29 -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:31:35.868 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.868 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.868 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:31:35.868 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:31:35.868 10:45:29 -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:31:35.868 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.868 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.868 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:31:35.868 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:31:35.868 10:45:29 -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:31:35.868 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.868 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.868 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:31:35.868 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:31:35.868 10:45:29 -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:31:35.868 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.868 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.868 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:31:35.868 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:31:35.868 10:45:29 -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:31:35.868 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.868 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.868 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:31:35.868 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:31:35.868 10:45:29 -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:31:35.868 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.868 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.868 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.868 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:31:35.868 10:45:29 -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:31:35.868 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.868 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.868 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.868 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 
00:31:35.868 10:45:29 -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:31:35.868 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.868 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.868 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.868 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:31:35.868 10:45:29 -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:31:35.868 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.868 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.868 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.868 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:31:35.868 10:45:29 -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:31:35.868 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.868 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.868 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:31:35.868 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:31:35.868 10:45:29 -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:31:35.868 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.868 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.868 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.868 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:31:35.868 10:45:29 -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:31:35.868 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.868 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.868 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.868 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:31:35.868 10:45:29 -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:31:35.868 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.868 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.868 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.868 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:31:35.868 10:45:29 -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:31:35.868 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.868 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.868 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.868 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:31:35.868 10:45:29 -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:31:35.868 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.868 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.868 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.868 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:31:35.868 10:45:29 -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:31:35.868 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.868 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.868 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.868 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:31:35.868 10:45:29 -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:31:35.868 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.868 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.868 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.868 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:31:35.868 10:45:29 -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:31:35.868 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.868 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.868 10:45:29 -- nvme/functions.sh@22 -- # 
[[ -n 0 ]] 00:31:35.868 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:31:35.868 10:45:29 -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:31:35.868 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.868 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.868 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.868 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:31:35.868 10:45:29 -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:31:35.868 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.868 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.868 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.868 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:31:35.868 10:45:29 -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:31:35.868 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.868 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.868 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.868 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:31:35.868 10:45:29 -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:31:35.868 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.868 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.868 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.868 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:31:35.868 10:45:29 -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:31:35.868 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.868 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.868 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.868 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:31:35.868 10:45:29 -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:31:35.868 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.868 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.868 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:31:35.868 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:31:35.868 10:45:29 -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:31:35.868 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.868 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.868 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:31:35.868 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:31:35.868 10:45:29 -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:31:35.869 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.869 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.869 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:31:35.869 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:31:35.869 10:45:29 -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:31:35.869 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.869 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.869 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.869 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:31:35.869 10:45:29 -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:31:35.869 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.869 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.869 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.869 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:31:35.869 10:45:29 -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:31:35.869 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.869 
10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.869 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.869 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:31:35.869 10:45:29 -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:31:35.869 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.869 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.869 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.869 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:31:35.869 10:45:29 -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:31:35.869 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.869 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.869 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:35.869 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:31:35.869 10:45:29 -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:31:35.869 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.869 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.869 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:31:35.869 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:31:35.869 10:45:29 -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:31:35.869 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.869 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.869 10:45:29 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:31:35.869 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:31:35.869 10:45:29 -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:31:35.869 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.869 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.869 10:45:29 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:31:35.869 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:31:35.869 10:45:29 -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:31:35.869 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.869 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.869 10:45:29 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:31:35.869 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:31:35.869 10:45:29 -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:31:35.869 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.869 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.869 10:45:29 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:31:35.869 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:31:35.869 10:45:29 -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:31:35.869 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.869 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.869 10:45:29 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:31:35.869 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:31:35.869 10:45:29 -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:31:35.869 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.869 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.869 10:45:29 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:31:35.869 10:45:29 -- nvme/functions.sh@23 -- # eval 
'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:31:35.869 10:45:29 -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:31:35.869 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.869 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.869 10:45:29 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:31:35.869 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:31:35.869 10:45:29 -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:31:35.869 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.869 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.869 10:45:29 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:31:35.869 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:31:35.869 10:45:29 -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:31:35.869 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.869 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.869 10:45:29 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:31:35.869 10:45:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:31:35.869 10:45:29 -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:31:35.869 10:45:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:35.869 10:45:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:35.869 10:45:29 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:31:35.869 10:45:29 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:31:35.869 10:45:29 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:31:35.869 10:45:29 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:06.0 00:31:35.869 10:45:29 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:31:35.869 10:45:29 -- nvme/functions.sh@65 -- # (( 1 > 0 )) 00:31:35.869 10:45:29 -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:31:35.869 10:45:29 -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:31:35.869 10:45:29 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:31:35.869 10:45:29 -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:31:35.869 10:45:29 -- nvme/functions.sh@190 -- # (( 1 == 0 )) 00:31:35.869 10:45:29 -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:31:35.869 10:45:29 -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:31:35.869 10:45:29 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:31:35.869 10:45:29 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:31:35.869 10:45:29 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:31:35.869 10:45:29 -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:31:35.869 10:45:29 -- nvme/functions.sh@184 -- # get_oncs nvme0 00:31:35.869 10:45:29 -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:31:35.869 10:45:29 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:31:35.869 10:45:29 -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:31:35.869 10:45:29 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:31:35.869 10:45:29 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:31:35.869 10:45:29 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:31:35.869 10:45:29 -- nvme/functions.sh@76 -- # echo 0x15d 00:31:35.869 10:45:29 -- nvme/functions.sh@184 -- # oncs=0x15d 00:31:35.869 10:45:29 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:31:35.869 10:45:29 -- nvme/functions.sh@197 -- # echo nvme0 00:31:35.869 10:45:29 -- nvme/functions.sh@205 
-- # (( 1 > 0 )) 00:31:35.869 10:45:29 -- nvme/functions.sh@206 -- # echo nvme0 00:31:35.869 10:45:29 -- nvme/functions.sh@207 -- # return 0 00:31:35.869 10:45:29 -- nvme/nvme_scc.sh@17 -- # ctrl=nvme0 00:31:35.869 10:45:29 -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:06.0 00:31:35.869 10:45:29 -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:31:36.128 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:31:36.387 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:31:37.765 10:45:31 -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:06.0' 00:31:37.765 10:45:31 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:31:37.765 10:45:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:37.765 10:45:31 -- common/autotest_common.sh@10 -- # set +x 00:31:37.765 ************************************ 00:31:37.765 START TEST nvme_simple_copy 00:31:37.765 ************************************ 00:31:37.765 10:45:31 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:06.0' 00:31:38.024 Initializing NVMe Controllers 00:31:38.024 Attaching to 0000:00:06.0 00:31:38.024 Controller supports SCC. Attached to 0000:00:06.0 00:31:38.024 Namespace ID: 1 size: 5GB 00:31:38.024 Initialization complete. 00:31:38.024 00:31:38.024 Controller QEMU NVMe Ctrl (12340 ) 00:31:38.024 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:31:38.024 Namespace Block Size:4096 00:31:38.024 Writing LBAs 0 to 63 with Random Data 00:31:38.024 Copied LBAs from 0 - 63 to the Destination LBA 256 00:31:38.024 LBAs matching Written Data: 64 00:31:38.024 00:31:38.024 real 0m0.320s 00:31:38.024 user 0m0.149s 00:31:38.024 sys 0m0.073s 00:31:38.024 ************************************ 00:31:38.024 END TEST nvme_simple_copy 00:31:38.024 ************************************ 00:31:38.024 10:45:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:38.024 10:45:31 -- common/autotest_common.sh@10 -- # set +x 00:31:38.283 ************************************ 00:31:38.283 END TEST nvme_scc 00:31:38.283 ************************************ 00:31:38.283 00:31:38.283 real 0m2.930s 00:31:38.283 user 0m0.732s 00:31:38.283 sys 0m2.041s 00:31:38.283 10:45:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:38.283 10:45:31 -- common/autotest_common.sh@10 -- # set +x 00:31:38.283 10:45:31 -- spdk/autotest.sh@229 -- # [[ 0 -eq 1 ]] 00:31:38.283 10:45:31 -- spdk/autotest.sh@232 -- # [[ 0 -eq 1 ]] 00:31:38.283 10:45:31 -- spdk/autotest.sh@235 -- # [[ '' -eq 1 ]] 00:31:38.283 10:45:31 -- spdk/autotest.sh@238 -- # [[ 0 -eq 1 ]] 00:31:38.283 10:45:31 -- spdk/autotest.sh@242 -- # [[ '' -eq 1 ]] 00:31:38.284 10:45:31 -- spdk/autotest.sh@246 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:31:38.284 10:45:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:38.284 10:45:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:38.284 10:45:31 -- common/autotest_common.sh@10 -- # set +x 00:31:38.284 ************************************ 00:31:38.284 START TEST nvme_rpc 00:31:38.284 ************************************ 00:31:38.284 10:45:31 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:31:38.284 * Looking for test storage... 
00:31:38.284 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:31:38.284 10:45:32 -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:38.284 10:45:32 -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:31:38.284 10:45:32 -- common/autotest_common.sh@1509 -- # bdfs=() 00:31:38.284 10:45:32 -- common/autotest_common.sh@1509 -- # local bdfs 00:31:38.284 10:45:32 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:31:38.284 10:45:32 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:31:38.284 10:45:32 -- common/autotest_common.sh@1498 -- # bdfs=() 00:31:38.284 10:45:32 -- common/autotest_common.sh@1498 -- # local bdfs 00:31:38.284 10:45:32 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:38.284 10:45:32 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:31:38.284 10:45:32 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:31:38.284 10:45:32 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:31:38.284 10:45:32 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:31:38.284 10:45:32 -- common/autotest_common.sh@1512 -- # echo 0000:00:06.0 00:31:38.284 10:45:32 -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:06.0 00:31:38.284 10:45:32 -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=145119 00:31:38.284 10:45:32 -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:31:38.284 10:45:32 -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:31:38.284 10:45:32 -- nvme/nvme_rpc.sh@19 -- # waitforlisten 145119 00:31:38.284 10:45:32 -- common/autotest_common.sh@819 -- # '[' -z 145119 ']' 00:31:38.284 10:45:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:38.284 10:45:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:38.284 10:45:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:38.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:38.284 10:45:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:38.284 10:45:32 -- common/autotest_common.sh@10 -- # set +x 00:31:38.543 [2024-07-12 10:45:32.236129] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
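Worth unpacking before the nvme_rpc output continues: the feature probe that handed nvme0 to the SCC test above. ctrl_has_scc pulls the controller's cached ONCS value (0x15d in this run) and tests bit 8, the Optional NVM Command Support bit for the Simple Copy command. The arithmetic, spelled out:

oncs=0x15d                      # from nvme0[oncs], cached during id-ctrl parsing
# 0x15d = 0b1_0101_1101; bit 8 (0x100) is set, so Simple Copy is supported.
if (( oncs & 1 << 8 )); then
    echo "controller supports simple copy"
fi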
00:31:38.544 [2024-07-12 10:45:32.236328] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145119 ] 00:31:38.544 [2024-07-12 10:45:32.418300] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:38.803 [2024-07-12 10:45:32.603047] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:38.803 [2024-07-12 10:45:32.603609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:38.803 [2024-07-12 10:45:32.603617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:40.177 10:45:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:40.177 10:45:33 -- common/autotest_common.sh@852 -- # return 0 00:31:40.177 10:45:33 -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:31:40.435 Nvme0n1 00:31:40.435 10:45:34 -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:31:40.435 10:45:34 -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:31:40.435 request: 00:31:40.435 { 00:31:40.435 "filename": "non_existing_file", 00:31:40.435 "bdev_name": "Nvme0n1", 00:31:40.435 "method": "bdev_nvme_apply_firmware", 00:31:40.435 "req_id": 1 00:31:40.435 } 00:31:40.435 Got JSON-RPC error response 00:31:40.435 response: 00:31:40.435 { 00:31:40.435 "code": -32603, 00:31:40.435 "message": "open file failed." 00:31:40.435 } 00:31:40.435 10:45:34 -- nvme/nvme_rpc.sh@32 -- # rv=1 00:31:40.435 10:45:34 -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:31:40.435 10:45:34 -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:31:40.694 10:45:34 -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:31:40.694 10:45:34 -- nvme/nvme_rpc.sh@40 -- # killprocess 145119 00:31:40.694 10:45:34 -- common/autotest_common.sh@926 -- # '[' -z 145119 ']' 00:31:40.694 10:45:34 -- common/autotest_common.sh@930 -- # kill -0 145119 00:31:40.694 10:45:34 -- common/autotest_common.sh@931 -- # uname 00:31:40.694 10:45:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:40.694 10:45:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 145119 00:31:40.694 10:45:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:40.694 killing process with pid 145119 00:31:40.694 10:45:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:40.694 10:45:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 145119' 00:31:40.694 10:45:34 -- common/autotest_common.sh@945 -- # kill 145119 00:31:40.694 10:45:34 -- common/autotest_common.sh@950 -- # wait 145119 00:31:42.597 ************************************ 00:31:42.597 END TEST nvme_rpc 00:31:42.597 ************************************ 00:31:42.597 00:31:42.597 real 0m4.358s 00:31:42.597 user 0m8.273s 00:31:42.597 sys 0m0.673s 00:31:42.597 10:45:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:42.598 10:45:36 -- common/autotest_common.sh@10 -- # set +x 00:31:42.598 10:45:36 -- spdk/autotest.sh@247 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:31:42.598 10:45:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:42.598 10:45:36 -- common/autotest_common.sh@1083 -- # 
xtrace_disable 00:31:42.598 10:45:36 -- common/autotest_common.sh@10 -- # set +x 00:31:42.598 ************************************ 00:31:42.598 START TEST nvme_rpc_timeouts 00:31:42.598 ************************************ 00:31:42.598 10:45:36 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:31:42.598 * Looking for test storage... 00:31:42.598 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:31:42.598 10:45:36 -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:42.598 10:45:36 -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_145230 00:31:42.598 10:45:36 -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_145230 00:31:42.598 10:45:36 -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=145254 00:31:42.598 10:45:36 -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:31:42.598 10:45:36 -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:31:42.598 10:45:36 -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 145254 00:31:42.598 10:45:36 -- common/autotest_common.sh@819 -- # '[' -z 145254 ']' 00:31:42.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:42.598 10:45:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:42.598 10:45:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:42.598 10:45:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:42.598 10:45:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:42.598 10:45:36 -- common/autotest_common.sh@10 -- # set +x 00:31:42.857 [2024-07-12 10:45:36.562288] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
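Both RPC tests in this stretch start the same way, as the lines just above show: launch spdk_tgt in the background, arm a trap so an interrupted run cannot leak the daemon or the temp files, then block in waitforlisten until the RPC socket answers. A stand-in for that choreography (the real waitforlisten lives in common/autotest_common.sh; polling rpc_get_methods here is an assumption, any cheap RPC that succeeds once the socket is up would do):

spdk=/home/vagrant/spdk_repo/spdk
"$spdk/build/bin/spdk_tgt" -m 0x3 &
spdk_tgt_pid=$!
trap 'kill -9 $spdk_tgt_pid; exit 1' SIGINT SIGTERM EXIT
for ((i = 0; i < 100; i++)); do                 # max_retries=100, as logged
    "$spdk/scripts/rpc.py" rpc_get_methods &>/dev/null && break
    sleep 0.1
done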
00:31:42.857 [2024-07-12 10:45:36.562499] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145254 ] 00:31:42.857 [2024-07-12 10:45:36.735154] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:43.116 [2024-07-12 10:45:36.919952] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:43.116 [2024-07-12 10:45:36.920308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:43.116 [2024-07-12 10:45:36.920327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:44.495 10:45:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:44.495 10:45:38 -- common/autotest_common.sh@852 -- # return 0 00:31:44.495 10:45:38 -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:31:44.495 Checking default timeout settings: 00:31:44.495 10:45:38 -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:31:44.754 10:45:38 -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:31:44.754 Making settings changes with rpc: 00:31:44.754 10:45:38 -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:31:45.012 10:45:38 -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:31:45.013 Check default vs. modified settings: 00:31:45.013 10:45:38 -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:31:45.271 10:45:38 -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:31:45.271 10:45:38 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:31:45.271 10:45:38 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_145230 00:31:45.271 10:45:38 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:31:45.271 10:45:38 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:31:45.271 10:45:38 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:31:45.271 10:45:38 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_145230 00:31:45.271 10:45:38 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:31:45.271 10:45:38 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:31:45.271 10:45:38 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:31:45.271 Setting action_on_timeout is changed as expected. 00:31:45.271 10:45:38 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:31:45.271 10:45:38 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
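The comparison just logged is the core of the timeouts test: save_config is snapshotted before and after bdev_nvme_set_options, and each field of interest is plucked from both snapshots with the same grep | awk | sed pipeline, a changed value being the pass condition. Condensed, with the tmp file names from the log:

get_setting() {    # field name, settings file -> bare value, punctuation stripped
    grep "$1" "$2" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g'
}
for setting in action_on_timeout timeout_us timeout_admin_us; do
    before=$(get_setting "$setting" /tmp/settings_default_145230)
    after=$(get_setting "$setting" /tmp/settings_modified_145230)
    [[ $before == "$after" ]] && { echo "Setting $setting did not change" >&2; exit 1; }
    echo "Setting $setting is changed as expected."
done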
00:31:45.271 10:45:38 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:31:45.271 10:45:38 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_145230 00:31:45.271 10:45:38 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:31:45.271 10:45:38 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:31:45.271 10:45:38 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:31:45.271 10:45:38 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:31:45.271 10:45:38 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_145230 00:31:45.271 10:45:38 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:31:45.271 10:45:38 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:31:45.271 10:45:38 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:31:45.272 10:45:38 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:31:45.272 Setting timeout_us is changed as expected. 00:31:45.272 10:45:38 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:31:45.272 10:45:38 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_145230 00:31:45.272 10:45:38 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:31:45.272 10:45:38 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:31:45.272 10:45:38 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:31:45.272 10:45:38 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_145230 00:31:45.272 10:45:38 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:31:45.272 10:45:38 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:31:45.272 10:45:38 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:31:45.272 10:45:38 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:31:45.272 10:45:38 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:31:45.272 Setting timeout_admin_us is changed as expected. 00:31:45.272 10:45:38 -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:31:45.272 10:45:38 -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_145230 /tmp/settings_modified_145230 00:31:45.272 10:45:38 -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 145254 00:31:45.272 10:45:38 -- common/autotest_common.sh@926 -- # '[' -z 145254 ']' 00:31:45.272 10:45:38 -- common/autotest_common.sh@930 -- # kill -0 145254 00:31:45.272 10:45:38 -- common/autotest_common.sh@931 -- # uname 00:31:45.272 10:45:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:45.272 10:45:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 145254 00:31:45.272 10:45:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:45.272 10:45:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:45.272 10:45:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 145254' 00:31:45.272 killing process with pid 145254 00:31:45.272 10:45:39 -- common/autotest_common.sh@945 -- # kill 145254 00:31:45.272 10:45:39 -- common/autotest_common.sh@950 -- # wait 145254 00:31:47.172 RPC TIMEOUT SETTING TEST PASSED. 00:31:47.172 10:45:40 -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
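Stripped of the bookkeeping, the test that just passed is three RPC calls. The xtrace above does not show output redirections, so sending each save_config into its settings file is inferred from the comparison step (file names as in the log, with $$ standing in for the hard-coded pid suffix):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc save_config > /tmp/settings_default_$$       # snapshot the defaults
$rpc bdev_nvme_set_options --timeout-us=12000000 \
    --timeout-admin-us=24000000 --action-on-timeout=abort
$rpc save_config > /tmp/settings_modified_$$      # snapshot after the change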
00:31:47.172 00:31:47.172 real 0m4.522s 00:31:47.172 user 0m8.655s 00:31:47.172 sys 0m0.727s 00:31:47.172 10:45:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:47.172 10:45:40 -- common/autotest_common.sh@10 -- # set +x 00:31:47.172 ************************************ 00:31:47.172 END TEST nvme_rpc_timeouts 00:31:47.172 ************************************ 00:31:47.172 10:45:40 -- spdk/autotest.sh@251 -- # '[' 1 -eq 0 ']' 00:31:47.172 10:45:40 -- spdk/autotest.sh@255 -- # [[ 0 -eq 1 ]] 00:31:47.172 10:45:40 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:31:47.172 10:45:40 -- spdk/autotest.sh@268 -- # timing_exit lib 00:31:47.172 10:45:40 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:47.172 10:45:40 -- common/autotest_common.sh@10 -- # set +x 00:31:47.172 10:45:41 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:31:47.172 10:45:41 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:31:47.172 10:45:41 -- spdk/autotest.sh@287 -- # '[' 0 -eq 1 ']' 00:31:47.172 10:45:41 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:31:47.172 10:45:41 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:31:47.172 10:45:41 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:31:47.172 10:45:41 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:31:47.172 10:45:41 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:31:47.172 10:45:41 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:31:47.172 10:45:41 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:31:47.172 10:45:41 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:31:47.172 10:45:41 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:31:47.172 10:45:41 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:31:47.172 10:45:41 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:31:47.172 10:45:41 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:31:47.172 10:45:41 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:31:47.172 10:45:41 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:31:47.172 10:45:41 -- spdk/autotest.sh@378 -- # [[ 1 -eq 1 ]] 00:31:47.173 10:45:41 -- spdk/autotest.sh@379 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:31:47.173 10:45:41 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:31:47.173 10:45:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:47.173 10:45:41 -- common/autotest_common.sh@10 -- # set +x 00:31:47.173 ************************************ 00:31:47.173 START TEST blockdev_raid5f 00:31:47.173 ************************************ 00:31:47.173 10:45:41 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:31:47.431 * Looking for test storage... 
00:31:47.431 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:31:47.431 10:45:41 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:31:47.431 10:45:41 -- bdev/nbd_common.sh@6 -- # set -e 00:31:47.431 10:45:41 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:31:47.431 10:45:41 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:31:47.431 10:45:41 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:31:47.431 10:45:41 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:31:47.431 10:45:41 -- bdev/blockdev.sh@18 -- # : 00:31:47.431 10:45:41 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:31:47.431 10:45:41 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:31:47.431 10:45:41 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:31:47.431 10:45:41 -- bdev/blockdev.sh@672 -- # uname -s 00:31:47.431 10:45:41 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:31:47.431 10:45:41 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:31:47.431 10:45:41 -- bdev/blockdev.sh@680 -- # test_type=raid5f 00:31:47.431 10:45:41 -- bdev/blockdev.sh@681 -- # crypto_device= 00:31:47.431 10:45:41 -- bdev/blockdev.sh@682 -- # dek= 00:31:47.431 10:45:41 -- bdev/blockdev.sh@683 -- # env_ctx= 00:31:47.431 10:45:41 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:31:47.431 10:45:41 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:31:47.431 10:45:41 -- bdev/blockdev.sh@688 -- # [[ raid5f == bdev ]] 00:31:47.431 10:45:41 -- bdev/blockdev.sh@688 -- # [[ raid5f == crypto_* ]] 00:31:47.431 10:45:41 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:31:47.431 10:45:41 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=145410 00:31:47.431 10:45:41 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:31:47.431 10:45:41 -- bdev/blockdev.sh@47 -- # waitforlisten 145410 00:31:47.431 10:45:41 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:31:47.431 10:45:41 -- common/autotest_common.sh@819 -- # '[' -z 145410 ']' 00:31:47.431 10:45:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:47.431 10:45:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:47.431 10:45:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:47.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:47.431 10:45:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:47.431 10:45:41 -- common/autotest_common.sh@10 -- # set +x 00:31:47.431 [2024-07-12 10:45:41.189825] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
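Every suite in this log is dispatched through the run_test helper visible above: print a START banner, time the command, print an END banner; the real/user/sys triplets scattered through the output are the time builtin's report. A minimal stand-in (the actual helper in common/autotest_common.sh also toggles xtrace and checks its arguments, hence the '[' 3 -le 1 ']' lines):

run_test() {
    local name=$1; shift
    echo "START TEST $name"
    time "$@"
    echo "END TEST $name"
}
run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f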
00:31:47.431 [2024-07-12 10:45:41.190037] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145410 ] 00:31:47.690 [2024-07-12 10:45:41.360556] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:47.690 [2024-07-12 10:45:41.543643] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:47.690 [2024-07-12 10:45:41.543865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:49.068 10:45:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:49.068 10:45:42 -- common/autotest_common.sh@852 -- # return 0 00:31:49.068 10:45:42 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:31:49.068 10:45:42 -- bdev/blockdev.sh@724 -- # setup_raid5f_conf 00:31:49.068 10:45:42 -- bdev/blockdev.sh@278 -- # rpc_cmd 00:31:49.068 10:45:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:49.068 10:45:42 -- common/autotest_common.sh@10 -- # set +x 00:31:49.068 Malloc0 00:31:49.068 Malloc1 00:31:49.068 Malloc2 00:31:49.068 10:45:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:49.068 10:45:42 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:31:49.068 10:45:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:49.068 10:45:42 -- common/autotest_common.sh@10 -- # set +x 00:31:49.068 10:45:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:49.068 10:45:42 -- bdev/blockdev.sh@738 -- # cat 00:31:49.068 10:45:42 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:31:49.068 10:45:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:49.068 10:45:42 -- common/autotest_common.sh@10 -- # set +x 00:31:49.068 10:45:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:49.068 10:45:42 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:31:49.068 10:45:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:49.068 10:45:42 -- common/autotest_common.sh@10 -- # set +x 00:31:49.328 10:45:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:49.328 10:45:42 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:31:49.328 10:45:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:49.328 10:45:42 -- common/autotest_common.sh@10 -- # set +x 00:31:49.328 10:45:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:49.328 10:45:42 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:31:49.328 10:45:42 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:31:49.328 10:45:42 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:31:49.328 10:45:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:49.328 10:45:42 -- common/autotest_common.sh@10 -- # set +x 00:31:49.328 10:45:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:49.328 10:45:43 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:31:49.328 10:45:43 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "3414332d-e444-4f0e-b447-5fd0bb1ffdee"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "3414332d-e444-4f0e-b447-5fd0bb1ffdee",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' 
"write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "3414332d-e444-4f0e-b447-5fd0bb1ffdee",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "f3020fec-ee34-4318-9dd5-ec8124993b85",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "604e0d9b-9e1d-4c7c-a025-e7fcfe695c20",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "73c1f616-e847-4b2a-b903-3e11c7b24629",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:31:49.328 10:45:43 -- bdev/blockdev.sh@747 -- # jq -r .name 00:31:49.328 10:45:43 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:31:49.328 10:45:43 -- bdev/blockdev.sh@750 -- # hello_world_bdev=raid5f 00:31:49.328 10:45:43 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:31:49.328 10:45:43 -- bdev/blockdev.sh@752 -- # killprocess 145410 00:31:49.328 10:45:43 -- common/autotest_common.sh@926 -- # '[' -z 145410 ']' 00:31:49.328 10:45:43 -- common/autotest_common.sh@930 -- # kill -0 145410 00:31:49.328 10:45:43 -- common/autotest_common.sh@931 -- # uname 00:31:49.328 10:45:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:49.328 10:45:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 145410 00:31:49.328 10:45:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:49.328 10:45:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:49.328 killing process with pid 145410 00:31:49.328 10:45:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 145410' 00:31:49.328 10:45:43 -- common/autotest_common.sh@945 -- # kill 145410 00:31:49.328 10:45:43 -- common/autotest_common.sh@950 -- # wait 145410 00:31:51.858 10:45:45 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:31:51.858 10:45:45 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:31:51.858 10:45:45 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:31:51.858 10:45:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:51.858 10:45:45 -- common/autotest_common.sh@10 -- # set +x 00:31:51.858 ************************************ 00:31:51.858 START TEST bdev_hello_world 00:31:51.858 ************************************ 00:31:51.858 10:45:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:31:51.858 [2024-07-12 10:45:45.339556] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:31:51.858 [2024-07-12 10:45:45.339776] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145489 ] 00:31:51.858 [2024-07-12 10:45:45.511733] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:51.858 [2024-07-12 10:45:45.705883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:52.430 [2024-07-12 10:45:46.200150] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:31:52.430 [2024-07-12 10:45:46.200241] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:31:52.430 [2024-07-12 10:45:46.200273] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:31:52.430 [2024-07-12 10:45:46.200747] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:31:52.430 [2024-07-12 10:45:46.200917] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:31:52.430 [2024-07-12 10:45:46.200952] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:31:52.430 [2024-07-12 10:45:46.201022] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:31:52.430 00:31:52.430 [2024-07-12 10:45:46.201052] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:31:53.829 00:31:53.829 real 0m2.133s 00:31:53.829 user 0m1.710s 00:31:53.829 sys 0m0.301s 00:31:53.829 10:45:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:53.829 10:45:47 -- common/autotest_common.sh@10 -- # set +x 00:31:53.829 ************************************ 00:31:53.829 END TEST bdev_hello_world 00:31:53.829 ************************************ 00:31:53.829 10:45:47 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:31:53.829 10:45:47 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:31:53.829 10:45:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:53.829 10:45:47 -- common/autotest_common.sh@10 -- # set +x 00:31:53.829 ************************************ 00:31:53.829 START TEST bdev_bounds 00:31:53.829 ************************************ 00:31:53.829 10:45:47 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:31:53.829 10:45:47 -- bdev/blockdev.sh@288 -- # bdevio_pid=145551 00:31:53.829 10:45:47 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:31:53.829 10:45:47 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 145551' 00:31:53.829 Process bdevio pid: 145551 00:31:53.829 10:45:47 -- bdev/blockdev.sh@291 -- # waitforlisten 145551 00:31:53.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:53.829 10:45:47 -- common/autotest_common.sh@819 -- # '[' -z 145551 ']' 00:31:53.829 10:45:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:53.829 10:45:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:53.829 10:45:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
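The hello-world pass above is a plain write-then-read through the raid5f bdev: open the bdev, open an io channel, write, then read the string back. Repeating it outside the harness takes one command, with both paths and the bdev name copied from the trace:

/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f
# A healthy run ends with: Read string from bdev : Hello World!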
00:31:53.830 10:45:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:53.830 10:45:47 -- common/autotest_common.sh@10 -- # set +x 00:31:53.830 10:45:47 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:31:53.830 [2024-07-12 10:45:47.522317] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:31:53.830 [2024-07-12 10:45:47.522691] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145551 ] 00:31:53.830 [2024-07-12 10:45:47.698662] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:54.116 [2024-07-12 10:45:47.889308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:54.116 [2024-07-12 10:45:47.889440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:54.116 [2024-07-12 10:45:47.889439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:55.500 10:45:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:55.500 10:45:49 -- common/autotest_common.sh@852 -- # return 0 00:31:55.500 10:45:49 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:31:55.500 I/O targets: 00:31:55.500 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:31:55.500 00:31:55.500 00:31:55.500 CUnit - A unit testing framework for C - Version 2.1-3 00:31:55.500 http://cunit.sourceforge.net/ 00:31:55.500 00:31:55.500 00:31:55.500 Suite: bdevio tests on: raid5f 00:31:55.500 Test: blockdev write read block ...passed 00:31:55.500 Test: blockdev write zeroes read block ...passed 00:31:55.500 Test: blockdev write zeroes read no split ...passed 00:31:55.500 Test: blockdev write zeroes read split ...passed 00:31:55.500 Test: blockdev write zeroes read split partial ...passed 00:31:55.500 Test: blockdev reset ...passed 00:31:55.500 Test: blockdev write read 8 blocks ...passed 00:31:55.500 Test: blockdev write read size > 128k ...passed 00:31:55.500 Test: blockdev write read invalid size ...passed 00:31:55.500 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:55.500 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:55.500 Test: blockdev write read max offset ...passed 00:31:55.500 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:55.500 Test: blockdev writev readv 8 blocks ...passed 00:31:55.500 Test: blockdev writev readv 30 x 1block ...passed 00:31:55.500 Test: blockdev writev readv block ...passed 00:31:55.500 Test: blockdev writev readv size > 128k ...passed 00:31:55.500 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:55.500 Test: blockdev comparev and writev ...passed 00:31:55.500 Test: blockdev nvme passthru rw ...passed 00:31:55.500 Test: blockdev nvme passthru vendor specific ...passed 00:31:55.500 Test: blockdev nvme admin passthru ...passed 00:31:55.500 Test: blockdev copy ...passed 00:31:55.500 00:31:55.500 Run Summary: Type Total Ran Passed Failed Inactive 00:31:55.500 suites 1 1 n/a 0 0 00:31:55.501 tests 23 23 23 0 0 00:31:55.501 asserts 130 130 130 0 n/a 00:31:55.501 00:31:55.501 Elapsed time = 0.436 seconds 00:31:55.501 0 00:31:55.501 10:45:49 -- bdev/blockdev.sh@293 -- # killprocess 145551 00:31:55.501 10:45:49 -- common/autotest_common.sh@926 -- # '[' -z 145551 ']' 
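bdev_bounds runs bdevio in server mode: -w keeps it waiting after it registers the bdevs from bdev.json, and tests.py fires the perform_tests RPC that produced the CUnit summary above (23 tests, 130 asserts, 0.436 seconds). The shape of that exchange, paths and flags per the trace:

spdk=/home/vagrant/spdk_repo/spdk
"$spdk/test/bdev/bdevio/bdevio" -w -s 0 --json "$spdk/test/bdev/bdev.json" &
bdevio_pid=$!
# ... wait for /var/tmp/spdk.sock to answer, as in the earlier tests ...
"$spdk/test/bdev/bdevio/tests.py" perform_tests
kill "$bdevio_pid"; wait "$bdevio_pid"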
00:31:55.501 10:45:49 -- common/autotest_common.sh@930 -- # kill -0 145551 00:31:55.501 10:45:49 -- common/autotest_common.sh@931 -- # uname 00:31:55.501 10:45:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:55.501 10:45:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 145551 00:31:55.759 10:45:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:55.759 10:45:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:55.759 10:45:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 145551' 00:31:55.759 killing process with pid 145551 00:31:55.759 10:45:49 -- common/autotest_common.sh@945 -- # kill 145551 00:31:55.759 10:45:49 -- common/autotest_common.sh@950 -- # wait 145551 00:31:57.137 10:45:50 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:31:57.137 00:31:57.137 real 0m3.167s 00:31:57.137 user 0m7.981s 00:31:57.137 sys 0m0.435s 00:31:57.137 10:45:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:57.137 10:45:50 -- common/autotest_common.sh@10 -- # set +x 00:31:57.137 ************************************ 00:31:57.137 END TEST bdev_bounds 00:31:57.137 ************************************ 00:31:57.137 10:45:50 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:31:57.137 10:45:50 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:31:57.137 10:45:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:57.137 10:45:50 -- common/autotest_common.sh@10 -- # set +x 00:31:57.137 ************************************ 00:31:57.137 START TEST bdev_nbd 00:31:57.137 ************************************ 00:31:57.137 10:45:50 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:31:57.137 10:45:50 -- bdev/blockdev.sh@298 -- # uname -s 00:31:57.137 10:45:50 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:31:57.137 10:45:50 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:57.137 10:45:50 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:31:57.137 10:45:50 -- bdev/blockdev.sh@302 -- # bdev_all=($2) 00:31:57.137 10:45:50 -- bdev/blockdev.sh@302 -- # local bdev_all 00:31:57.137 10:45:50 -- bdev/blockdev.sh@303 -- # local bdev_num=1 00:31:57.137 10:45:50 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:31:57.137 10:45:50 -- bdev/blockdev.sh@309 -- # nbd_all=(/dev/nbd+([0-9])) 00:31:57.137 10:45:50 -- bdev/blockdev.sh@309 -- # local nbd_all 00:31:57.137 10:45:50 -- bdev/blockdev.sh@310 -- # bdev_num=1 00:31:57.137 10:45:50 -- bdev/blockdev.sh@312 -- # nbd_list=(${nbd_all[@]::bdev_num}) 00:31:57.137 10:45:50 -- bdev/blockdev.sh@312 -- # local nbd_list 00:31:57.137 10:45:50 -- bdev/blockdev.sh@313 -- # bdev_list=(${bdev_all[@]::bdev_num}) 00:31:57.137 10:45:50 -- bdev/blockdev.sh@313 -- # local bdev_list 00:31:57.137 10:45:50 -- bdev/blockdev.sh@316 -- # nbd_pid=145627 00:31:57.137 10:45:50 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:31:57.137 10:45:50 -- bdev/blockdev.sh@318 -- # waitforlisten 145627 /var/tmp/spdk-nbd.sock 00:31:57.137 10:45:50 -- common/autotest_common.sh@819 -- # '[' -z 145627 ']' 00:31:57.137 10:45:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:31:57.137 10:45:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:57.137 Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:31:57.137 10:45:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:31:57.137 10:45:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:57.137 10:45:50 -- common/autotest_common.sh@10 -- # set +x 00:31:57.137 10:45:50 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:31:57.137 [2024-07-12 10:45:50.731564] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:31:57.137 [2024-07-12 10:45:50.731915] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:57.137 [2024-07-12 10:45:50.881029] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:57.396 [2024-07-12 10:45:51.060925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:57.963 10:45:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:57.963 10:45:51 -- common/autotest_common.sh@852 -- # return 0 00:31:57.963 10:45:51 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:31:57.963 10:45:51 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:57.963 10:45:51 -- bdev/nbd_common.sh@114 -- # bdev_list=($2) 00:31:57.963 10:45:51 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:31:57.963 10:45:51 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:31:57.963 10:45:51 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:57.963 10:45:51 -- bdev/nbd_common.sh@23 -- # bdev_list=($2) 00:31:57.963 10:45:51 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:31:57.963 10:45:51 -- bdev/nbd_common.sh@24 -- # local i 00:31:57.963 10:45:51 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:31:57.963 10:45:51 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:31:57.963 10:45:51 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:31:57.963 10:45:51 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:31:58.221 10:45:51 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:31:58.221 10:45:51 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:31:58.221 10:45:51 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:31:58.221 10:45:51 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:31:58.221 10:45:51 -- common/autotest_common.sh@857 -- # local i 00:31:58.221 10:45:51 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:31:58.221 10:45:51 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:31:58.221 10:45:51 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:31:58.221 10:45:51 -- common/autotest_common.sh@861 -- # break 00:31:58.221 10:45:51 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:31:58.221 10:45:51 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:31:58.221 10:45:51 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:58.221 1+0 records in 00:31:58.221 1+0 records out 00:31:58.221 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000606043 s, 6.8 MB/s 00:31:58.221 10:45:51 -- common/autotest_common.sh@874 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:58.221 10:45:51 -- common/autotest_common.sh@874 -- # size=4096 00:31:58.221 10:45:51 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:58.221 10:45:51 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:31:58.221 10:45:51 -- common/autotest_common.sh@877 -- # return 0 00:31:58.221 10:45:51 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:31:58.221 10:45:51 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:31:58.221 10:45:51 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:58.221 10:45:52 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:31:58.221 { 00:31:58.221 "nbd_device": "/dev/nbd0", 00:31:58.221 "bdev_name": "raid5f" 00:31:58.221 } 00:31:58.221 ]' 00:31:58.221 10:45:52 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:31:58.221 10:45:52 -- bdev/nbd_common.sh@119 -- # echo '[ 00:31:58.221 { 00:31:58.221 "nbd_device": "/dev/nbd0", 00:31:58.221 "bdev_name": "raid5f" 00:31:58.221 } 00:31:58.221 ]' 00:31:58.221 10:45:52 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:31:58.480 10:45:52 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:31:58.480 10:45:52 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:58.480 10:45:52 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:31:58.480 10:45:52 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:58.480 10:45:52 -- bdev/nbd_common.sh@51 -- # local i 00:31:58.480 10:45:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:58.480 10:45:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:31:58.738 10:45:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:58.738 10:45:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:58.738 10:45:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:58.738 10:45:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:58.738 10:45:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:58.738 10:45:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:58.738 10:45:52 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:31:58.738 10:45:52 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:31:58.738 10:45:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:58.738 10:45:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:58.738 10:45:52 -- bdev/nbd_common.sh@41 -- # break 00:31:58.738 10:45:52 -- bdev/nbd_common.sh@45 -- # return 0 00:31:58.738 10:45:52 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:31:58.738 10:45:52 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:58.738 10:45:52 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:58.997 10:45:52 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:31:58.997 10:45:52 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:31:58.997 10:45:52 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:31:58.997 10:45:52 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:31:58.997 10:45:52 -- bdev/nbd_common.sh@65 -- # echo '' 00:31:58.997 10:45:52 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:58.997 10:45:52 -- bdev/nbd_common.sh@65 -- # true 00:31:58.997 10:45:52 -- bdev/nbd_common.sh@65 -- # count=0 00:31:58.997 10:45:52 -- bdev/nbd_common.sh@66 -- # 
echo 0 00:31:58.997 10:45:52 -- bdev/nbd_common.sh@122 -- # count=0 00:31:58.997 10:45:52 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:31:58.997 10:45:52 -- bdev/nbd_common.sh@127 -- # return 0 00:31:58.997 10:45:52 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:31:58.997 10:45:52 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:58.997 10:45:52 -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:31:58.997 10:45:52 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:31:58.997 10:45:52 -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:31:58.997 10:45:52 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:31:58.997 10:45:52 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:31:58.997 10:45:52 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:58.997 10:45:52 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:31:58.997 10:45:52 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:58.997 10:45:52 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:31:58.997 10:45:52 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:58.997 10:45:52 -- bdev/nbd_common.sh@12 -- # local i 00:31:58.997 10:45:52 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:58.997 10:45:52 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:58.997 10:45:52 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:31:59.256 /dev/nbd0 00:31:59.256 10:45:53 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:59.256 10:45:53 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:59.256 10:45:53 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:31:59.256 10:45:53 -- common/autotest_common.sh@857 -- # local i 00:31:59.256 10:45:53 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:31:59.256 10:45:53 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:31:59.256 10:45:53 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:31:59.256 10:45:53 -- common/autotest_common.sh@861 -- # break 00:31:59.256 10:45:53 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:31:59.256 10:45:53 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:31:59.256 10:45:53 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:59.256 1+0 records in 00:31:59.256 1+0 records out 00:31:59.256 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000975867 s, 4.2 MB/s 00:31:59.256 10:45:53 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:59.256 10:45:53 -- common/autotest_common.sh@874 -- # size=4096 00:31:59.256 10:45:53 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:59.256 10:45:53 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:31:59.256 10:45:53 -- common/autotest_common.sh@877 -- # return 0 00:31:59.256 10:45:53 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:59.256 10:45:53 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:59.256 10:45:53 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:31:59.256 10:45:53 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:59.257 10:45:53 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:59.514 10:45:53 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:31:59.514 { 00:31:59.514 "nbd_device": "/dev/nbd0", 
00:31:59.514 "bdev_name": "raid5f" 00:31:59.514 } 00:31:59.514 ]' 00:31:59.514 10:45:53 -- bdev/nbd_common.sh@64 -- # echo '[ 00:31:59.514 { 00:31:59.514 "nbd_device": "/dev/nbd0", 00:31:59.514 "bdev_name": "raid5f" 00:31:59.514 } 00:31:59.514 ]' 00:31:59.514 10:45:53 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:31:59.514 10:45:53 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:31:59.514 10:45:53 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:31:59.514 10:45:53 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:59.514 10:45:53 -- bdev/nbd_common.sh@65 -- # count=1 00:31:59.514 10:45:53 -- bdev/nbd_common.sh@66 -- # echo 1 00:31:59.514 10:45:53 -- bdev/nbd_common.sh@95 -- # count=1 00:31:59.514 10:45:53 -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:31:59.514 10:45:53 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:31:59.514 10:45:53 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:31:59.514 10:45:53 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:31:59.514 10:45:53 -- bdev/nbd_common.sh@71 -- # local operation=write 00:31:59.514 10:45:53 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:31:59.514 10:45:53 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:31:59.514 10:45:53 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:31:59.514 256+0 records in 00:31:59.514 256+0 records out 00:31:59.514 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0054924 s, 191 MB/s 00:31:59.514 10:45:53 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:31:59.514 10:45:53 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:31:59.514 256+0 records in 00:31:59.514 256+0 records out 00:31:59.514 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0286891 s, 36.5 MB/s 00:31:59.514 10:45:53 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:31:59.514 10:45:53 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:31:59.514 10:45:53 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:31:59.514 10:45:53 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:31:59.514 10:45:53 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:31:59.514 10:45:53 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:31:59.514 10:45:53 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:31:59.514 10:45:53 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:31:59.514 10:45:53 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:31:59.514 10:45:53 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:31:59.514 10:45:53 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:31:59.514 10:45:53 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:59.514 10:45:53 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:31:59.514 10:45:53 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:59.514 10:45:53 -- bdev/nbd_common.sh@51 -- # local i 00:31:59.514 10:45:53 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:59.514 10:45:53 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:31:59.772 10:45:53 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:59.772 10:45:53 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 
00:31:59.772 10:45:53 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:59.772 10:45:53 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:59.772 10:45:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:59.772 10:45:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:59.772 10:45:53 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:32:00.030 10:45:53 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:32:00.030 10:45:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:00.030 10:45:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:00.030 10:45:53 -- bdev/nbd_common.sh@41 -- # break 00:32:00.030 10:45:53 -- bdev/nbd_common.sh@45 -- # return 0 00:32:00.030 10:45:53 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:32:00.030 10:45:53 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:00.030 10:45:53 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:32:00.288 10:45:54 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:32:00.288 10:45:54 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:32:00.288 10:45:54 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:32:00.288 10:45:54 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:32:00.288 10:45:54 -- bdev/nbd_common.sh@65 -- # echo '' 00:32:00.288 10:45:54 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:32:00.288 10:45:54 -- bdev/nbd_common.sh@65 -- # true 00:32:00.288 10:45:54 -- bdev/nbd_common.sh@65 -- # count=0 00:32:00.288 10:45:54 -- bdev/nbd_common.sh@66 -- # echo 0 00:32:00.288 10:45:54 -- bdev/nbd_common.sh@104 -- # count=0 00:32:00.288 10:45:54 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:32:00.288 10:45:54 -- bdev/nbd_common.sh@109 -- # return 0 00:32:00.288 10:45:54 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:32:00.288 10:45:54 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:00.288 10:45:54 -- bdev/nbd_common.sh@132 -- # nbd_list=($2) 00:32:00.288 10:45:54 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:32:00.288 10:45:54 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:32:00.288 10:45:54 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:32:00.547 malloc_lvol_verify 00:32:00.547 10:45:54 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:32:00.805 0b0a07c1-5ddf-4f4a-8d35-58aa35a1a4a9 00:32:00.805 10:45:54 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:32:01.063 87efeefe-db3a-4227-b4e9-9b3123b237bc 00:32:01.063 10:45:54 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:32:01.321 /dev/nbd0 00:32:01.321 10:45:54 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:32:01.321 mke2fs 1.45.5 (07-Jan-2020) 00:32:01.321 00:32:01.321 Filesystem too small for a journal 00:32:01.321 Creating filesystem with 1024 4k blocks and 1024 inodes 00:32:01.321 00:32:01.321 Allocating group tables: 0/1 done 00:32:01.321 Writing inode tables: 0/1 done 00:32:01.321 Writing superblocks and filesystem accounting information: 0/1 done 00:32:01.321 00:32:01.321 10:45:54 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:32:01.321 10:45:54 -- bdev/nbd_common.sh@142 -- # 
nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:32:01.321 10:45:54 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:01.321 10:45:54 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:32:01.321 10:45:54 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:01.321 10:45:54 -- bdev/nbd_common.sh@51 -- # local i 00:32:01.321 10:45:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:01.321 10:45:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:32:01.580 10:45:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:01.580 10:45:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:01.580 10:45:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:01.580 10:45:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:01.580 10:45:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:01.580 10:45:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:01.580 10:45:55 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:32:01.580 10:45:55 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:32:01.580 10:45:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:01.580 10:45:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:01.580 10:45:55 -- bdev/nbd_common.sh@41 -- # break 00:32:01.580 10:45:55 -- bdev/nbd_common.sh@45 -- # return 0 00:32:01.580 10:45:55 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:32:01.580 10:45:55 -- bdev/nbd_common.sh@147 -- # return 0 00:32:01.580 10:45:55 -- bdev/blockdev.sh@324 -- # killprocess 145627 00:32:01.580 10:45:55 -- common/autotest_common.sh@926 -- # '[' -z 145627 ']' 00:32:01.580 10:45:55 -- common/autotest_common.sh@930 -- # kill -0 145627 00:32:01.580 10:45:55 -- common/autotest_common.sh@931 -- # uname 00:32:01.580 10:45:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:01.580 10:45:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 145627 00:32:01.580 10:45:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:32:01.580 killing process with pid 145627 00:32:01.580 10:45:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:32:01.580 10:45:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 145627' 00:32:01.580 10:45:55 -- common/autotest_common.sh@945 -- # kill 145627 00:32:01.580 10:45:55 -- common/autotest_common.sh@950 -- # wait 145627 00:32:02.956 10:45:56 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:32:02.956 00:32:02.956 real 0m5.943s 00:32:02.956 user 0m8.421s 00:32:02.956 sys 0m1.077s 00:32:02.956 10:45:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:02.956 10:45:56 -- common/autotest_common.sh@10 -- # set +x 00:32:02.956 ************************************ 00:32:02.956 END TEST bdev_nbd 00:32:02.956 ************************************ 00:32:02.956 10:45:56 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:32:02.956 10:45:56 -- bdev/blockdev.sh@762 -- # '[' raid5f = nvme ']' 00:32:02.956 10:45:56 -- bdev/blockdev.sh@762 -- # '[' raid5f = gpt ']' 00:32:02.956 10:45:56 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:32:02.956 10:45:56 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:32:02.956 10:45:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:02.956 10:45:56 -- common/autotest_common.sh@10 -- # set +x 00:32:02.956 ************************************ 00:32:02.956 START TEST bdev_fio 00:32:02.956 ************************************ 00:32:02.956 10:45:56 -- 
common/autotest_common.sh@1104 -- # fio_test_suite '' 00:32:02.956 10:45:56 -- bdev/blockdev.sh@329 -- # local env_context 00:32:02.956 10:45:56 -- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:32:02.956 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:32:02.956 10:45:56 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:32:02.956 10:45:56 -- bdev/blockdev.sh@337 -- # echo '' 00:32:02.956 10:45:56 -- bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:32:02.956 10:45:56 -- bdev/blockdev.sh@337 -- # env_context= 00:32:02.956 10:45:56 -- bdev/blockdev.sh@338 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:32:02.956 10:45:56 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:32:02.956 10:45:56 -- common/autotest_common.sh@1260 -- # local workload=verify 00:32:02.956 10:45:56 -- common/autotest_common.sh@1261 -- # local bdev_type=AIO 00:32:02.956 10:45:56 -- common/autotest_common.sh@1262 -- # local env_context= 00:32:02.956 10:45:56 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:32:02.956 10:45:56 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:32:02.956 10:45:56 -- common/autotest_common.sh@1270 -- # '[' -z verify ']' 00:32:02.956 10:45:56 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:32:02.956 10:45:56 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:32:02.956 10:45:56 -- common/autotest_common.sh@1280 -- # cat 00:32:02.956 10:45:56 -- common/autotest_common.sh@1292 -- # '[' verify == verify ']' 00:32:02.956 10:45:56 -- common/autotest_common.sh@1293 -- # cat 00:32:02.956 10:45:56 -- common/autotest_common.sh@1302 -- # '[' AIO == AIO ']' 00:32:02.956 10:45:56 -- common/autotest_common.sh@1303 -- # /usr/src/fio/fio --version 00:32:02.956 10:45:56 -- common/autotest_common.sh@1303 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:32:02.956 10:45:56 -- common/autotest_common.sh@1304 -- # echo serialize_overlap=1 00:32:02.956 10:45:56 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:32:02.956 10:45:56 -- bdev/blockdev.sh@340 -- # echo '[job_raid5f]' 00:32:02.956 10:45:56 -- bdev/blockdev.sh@341 -- # echo filename=raid5f 00:32:02.956 10:45:56 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:32:02.956 10:45:56 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:32:02.956 10:45:56 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:32:02.956 10:45:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:02.956 10:45:56 -- common/autotest_common.sh@10 -- # set +x 00:32:02.956 ************************************ 00:32:02.956 START TEST bdev_fio_rw_verify 00:32:02.956 ************************************ 00:32:02.956 10:45:56 -- common/autotest_common.sh@1104 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:32:02.956 10:45:56 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:32:02.956 10:45:56 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:32:02.956 10:45:56 -- common/autotest_common.sh@1318 -- # sanitizers=(libasan libclang_rt.asan) 00:32:02.956 10:45:56 -- common/autotest_common.sh@1318 -- # local sanitizers 00:32:02.956 10:45:56 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:02.956 10:45:56 -- common/autotest_common.sh@1320 -- # shift 00:32:02.956 10:45:56 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:32:02.956 10:45:56 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:32:02.956 10:45:56 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:02.956 10:45:56 -- common/autotest_common.sh@1324 -- # grep libasan 00:32:02.956 10:45:56 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:32:02.956 10:45:56 -- common/autotest_common.sh@1324 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.5 00:32:02.956 10:45:56 -- common/autotest_common.sh@1325 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.5 ]] 00:32:02.956 10:45:56 -- common/autotest_common.sh@1326 -- # break 00:32:02.956 10:45:56 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.5 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:32:02.956 10:45:56 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:32:03.215 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:03.215 fio-3.35 00:32:03.215 Starting 1 thread 00:32:15.424 00:32:15.425 job_raid5f: (groupid=0, jobs=1): err= 0: pid=145896: Fri Jul 12 10:46:07 2024 00:32:15.425 read: IOPS=11.8k, BW=46.1MiB/s (48.4MB/s)(461MiB/10000msec) 00:32:15.425 slat (usec): min=18, max=238, avg=19.95, stdev= 3.90 00:32:15.425 clat (usec): min=12, max=729, avg=134.55, stdev=50.66 00:32:15.425 lat (usec): min=32, max=777, avg=154.50, stdev=51.92 00:32:15.425 clat percentiles (usec): 00:32:15.425 | 50.000th=[ 139], 99.000th=[ 285], 99.900th=[ 359], 99.990th=[ 562], 00:32:15.425 | 99.999th=[ 693] 00:32:15.425 write: IOPS=12.4k, BW=48.3MiB/s (50.6MB/s)(476MiB/9870msec); 0 zone resets 00:32:15.425 slat (usec): min=8, max=624, avg=17.63, stdev= 4.55 00:32:15.425 clat (usec): min=59, max=2479, avg=309.50, stdev=57.83 00:32:15.425 lat (usec): min=75, max=2497, avg=327.13, stdev=60.05 00:32:15.425 clat percentiles (usec): 00:32:15.425 | 50.000th=[ 310], 99.000th=[ 529], 99.900th=[ 922], 99.990th=[ 1057], 00:32:15.425 | 99.999th=[ 2474] 00:32:15.425 bw ( KiB/s): min=42520, max=53288, per=98.97%, avg=48912.84, stdev=3118.26, samples=19 00:32:15.425 iops : min=10630, max=13322, avg=12228.21, stdev=779.57, samples=19 00:32:15.425 lat (usec) : 20=0.01%, 50=0.01%, 
100=14.72%, 250=38.45%, 500=46.03% 00:32:15.425 lat (usec) : 750=0.69%, 1000=0.09% 00:32:15.425 lat (msec) : 2=0.01%, 4=0.01% 00:32:15.425 cpu : usr=99.52%, sys=0.20%, ctx=1580, majf=0, minf=8373 00:32:15.425 IO depths : 1=7.7%, 2=20.0%, 4=55.1%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:15.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:15.425 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:15.425 issued rwts: total=118118,121949,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:15.425 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:15.425 00:32:15.425 Run status group 0 (all jobs): 00:32:15.425 READ: bw=46.1MiB/s (48.4MB/s), 46.1MiB/s-46.1MiB/s (48.4MB/s-48.4MB/s), io=461MiB (484MB), run=10000-10000msec 00:32:15.425 WRITE: bw=48.3MiB/s (50.6MB/s), 48.3MiB/s-48.3MiB/s (50.6MB/s-50.6MB/s), io=476MiB (500MB), run=9870-9870msec 00:32:15.425 ----------------------------------------------------- 00:32:15.425 Suppressions used: 00:32:15.425 count bytes template 00:32:15.425 1 7 /usr/src/fio/parse.c 00:32:15.425 183 17568 /usr/src/fio/iolog.c 00:32:15.425 2 596 libcrypto.so 00:32:15.425 ----------------------------------------------------- 00:32:15.425 00:32:15.425 00:32:15.425 real 0m12.347s 00:32:15.425 user 0m12.833s 00:32:15.425 sys 0m0.575s 00:32:15.425 10:46:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:15.425 10:46:09 -- common/autotest_common.sh@10 -- # set +x 00:32:15.425 ************************************ 00:32:15.425 END TEST bdev_fio_rw_verify 00:32:15.425 ************************************ 00:32:15.425 10:46:09 -- bdev/blockdev.sh@348 -- # rm -f 00:32:15.425 10:46:09 -- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:32:15.425 10:46:09 -- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:32:15.425 10:46:09 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:32:15.425 10:46:09 -- common/autotest_common.sh@1260 -- # local workload=trim 00:32:15.425 10:46:09 -- common/autotest_common.sh@1261 -- # local bdev_type= 00:32:15.425 10:46:09 -- common/autotest_common.sh@1262 -- # local env_context= 00:32:15.425 10:46:09 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:32:15.425 10:46:09 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:32:15.425 10:46:09 -- common/autotest_common.sh@1270 -- # '[' -z trim ']' 00:32:15.425 10:46:09 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:32:15.425 10:46:09 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:32:15.425 10:46:09 -- common/autotest_common.sh@1280 -- # cat 00:32:15.425 10:46:09 -- common/autotest_common.sh@1292 -- # '[' trim == verify ']' 00:32:15.425 10:46:09 -- common/autotest_common.sh@1307 -- # '[' trim == trim ']' 00:32:15.425 10:46:09 -- common/autotest_common.sh@1308 -- # echo rw=trimwrite 00:32:15.425 10:46:09 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:32:15.425 10:46:09 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "3414332d-e444-4f0e-b447-5fd0bb1ffdee"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "3414332d-e444-4f0e-b447-5fd0bb1ffdee",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' 
' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "3414332d-e444-4f0e-b447-5fd0bb1ffdee",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "f3020fec-ee34-4318-9dd5-ec8124993b85",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "604e0d9b-9e1d-4c7c-a025-e7fcfe695c20",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "73c1f616-e847-4b2a-b903-3e11c7b24629",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:32:15.425 10:46:09 -- bdev/blockdev.sh@353 -- # [[ -n '' ]] 00:32:15.425 10:46:09 -- bdev/blockdev.sh@359 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:32:15.425 /home/vagrant/spdk_repo/spdk 00:32:15.425 10:46:09 -- bdev/blockdev.sh@360 -- # popd 00:32:15.425 10:46:09 -- bdev/blockdev.sh@361 -- # trap - SIGINT SIGTERM EXIT 00:32:15.425 10:46:09 -- bdev/blockdev.sh@362 -- # return 0 00:32:15.425 00:32:15.425 real 0m12.520s 00:32:15.425 user 0m12.951s 00:32:15.425 sys 0m0.627s 00:32:15.425 10:46:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:15.425 ************************************ 00:32:15.425 END TEST bdev_fio 00:32:15.425 ************************************ 00:32:15.425 10:46:09 -- common/autotest_common.sh@10 -- # set +x 00:32:15.425 10:46:09 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:32:15.425 10:46:09 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:32:15.425 10:46:09 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:32:15.425 10:46:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:15.425 10:46:09 -- common/autotest_common.sh@10 -- # set +x 00:32:15.425 ************************************ 00:32:15.425 START TEST bdev_verify 00:32:15.425 ************************************ 00:32:15.425 10:46:09 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:32:15.425 [2024-07-12 10:46:09.329747] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:32:15.425 [2024-07-12 10:46:09.329981] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146081 ] 00:32:15.684 [2024-07-12 10:46:09.504287] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:15.943 [2024-07-12 10:46:09.677221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:15.943 [2024-07-12 10:46:09.677219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:16.511 Running I/O for 5 seconds... 
00:32:21.776 00:32:21.777 Latency(us) 00:32:21.777 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:21.777 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:21.777 Verification LBA range: start 0x0 length 0x2000 00:32:21.777 raid5f : 5.01 8432.44 32.94 0.00 0.00 24072.06 207.59 20375.74 00:32:21.777 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:32:21.777 Verification LBA range: start 0x2000 length 0x2000 00:32:21.777 raid5f : 5.01 8616.31 33.66 0.00 0.00 23551.77 203.87 20256.58 00:32:21.777 =================================================================================================================== 00:32:21.777 Total : 17048.75 66.60 0.00 0.00 23809.14 203.87 20375.74 00:32:22.709 00:32:22.709 real 0m7.026s 00:32:22.710 user 0m12.876s 00:32:22.710 sys 0m0.277s 00:32:22.710 10:46:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:22.710 10:46:16 -- common/autotest_common.sh@10 -- # set +x 00:32:22.710 ************************************ 00:32:22.710 END TEST bdev_verify 00:32:22.710 ************************************ 00:32:22.710 10:46:16 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:32:22.710 10:46:16 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:32:22.710 10:46:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:22.710 10:46:16 -- common/autotest_common.sh@10 -- # set +x 00:32:22.710 ************************************ 00:32:22.710 START TEST bdev_verify_big_io 00:32:22.710 ************************************ 00:32:22.710 10:46:16 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:32:22.710 [2024-07-12 10:46:16.400446] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:32:22.710 [2024-07-12 10:46:16.400661] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146207 ] 00:32:22.710 [2024-07-12 10:46:16.570761] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:22.968 [2024-07-12 10:46:16.746763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:22.968 [2024-07-12 10:46:16.746772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:23.537 Running I/O for 5 seconds... 
00:32:28.806 00:32:28.807 Latency(us) 00:32:28.807 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:28.807 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:32:28.807 Verification LBA range: start 0x0 length 0x200 00:32:28.807 raid5f : 5.16 619.73 38.73 0.00 0.00 5390869.24 177.80 170631.91 00:32:28.807 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:32:28.807 Verification LBA range: start 0x200 length 0x200 00:32:28.807 raid5f : 5.15 640.66 40.04 0.00 0.00 5223784.96 157.32 161099.40 00:32:28.807 =================================================================================================================== 00:32:28.807 Total : 1260.38 78.77 0.00 0.00 5306014.92 157.32 170631.91 00:32:29.740 00:32:29.740 real 0m7.132s 00:32:29.740 user 0m13.129s 00:32:29.740 sys 0m0.277s 00:32:29.740 ************************************ 00:32:29.740 END TEST bdev_verify_big_io 00:32:29.740 ************************************ 00:32:29.740 10:46:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:29.740 10:46:23 -- common/autotest_common.sh@10 -- # set +x 00:32:29.740 10:46:23 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:32:29.740 10:46:23 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:32:29.740 10:46:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:29.740 10:46:23 -- common/autotest_common.sh@10 -- # set +x 00:32:29.740 ************************************ 00:32:29.740 START TEST bdev_write_zeroes 00:32:29.740 ************************************ 00:32:29.740 10:46:23 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:32:29.740 [2024-07-12 10:46:23.581892] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:32:29.740 [2024-07-12 10:46:23.582256] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146308 ] 00:32:29.999 [2024-07-12 10:46:23.745410] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:30.256 [2024-07-12 10:46:23.926477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:30.514 Running I/O for 1 seconds... 
00:32:31.887 00:32:31.888 Latency(us) 00:32:31.888 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:31.888 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:32:31.888 raid5f : 1.00 28180.18 110.08 0.00 0.00 4527.34 1459.67 5600.35 00:32:31.888 =================================================================================================================== 00:32:31.888 Total : 28180.18 110.08 0.00 0.00 4527.34 1459.67 5600.35 00:32:32.824 00:32:32.824 real 0m2.959s 00:32:32.824 user 0m2.608s 00:32:32.824 sys 0m0.238s 00:32:32.824 10:46:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:32.824 10:46:26 -- common/autotest_common.sh@10 -- # set +x 00:32:32.824 ************************************ 00:32:32.824 END TEST bdev_write_zeroes 00:32:32.824 ************************************ 00:32:32.824 10:46:26 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:32:32.824 10:46:26 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:32:32.824 10:46:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:32.824 10:46:26 -- common/autotest_common.sh@10 -- # set +x 00:32:32.824 ************************************ 00:32:32.824 START TEST bdev_json_nonenclosed 00:32:32.824 ************************************ 00:32:32.824 10:46:26 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:32:32.824 [2024-07-12 10:46:26.595442] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:32:32.824 [2024-07-12 10:46:26.595628] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146383 ] 00:32:33.083 [2024-07-12 10:46:26.761918] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:33.083 [2024-07-12 10:46:26.934449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:33.083 [2024-07-12 10:46:26.934647] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:32:33.083 [2024-07-12 10:46:26.934685] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:32:33.342 00:32:33.342 real 0m0.712s 00:32:33.342 user 0m0.481s 00:32:33.342 sys 0m0.127s 00:32:33.342 ************************************ 00:32:33.342 END TEST bdev_json_nonenclosed 00:32:33.342 ************************************ 00:32:33.342 10:46:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:33.342 10:46:27 -- common/autotest_common.sh@10 -- # set +x 00:32:33.601 10:46:27 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:32:33.601 10:46:27 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:32:33.601 10:46:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:33.601 10:46:27 -- common/autotest_common.sh@10 -- # set +x 00:32:33.601 ************************************ 00:32:33.601 START TEST bdev_json_nonarray 00:32:33.601 ************************************ 00:32:33.601 10:46:27 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:32:33.601 [2024-07-12 10:46:27.354511] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:32:33.601 [2024-07-12 10:46:27.354898] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146421 ] 00:32:33.859 [2024-07-12 10:46:27.522237] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:33.859 [2024-07-12 10:46:27.696827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:33.859 [2024-07-12 10:46:27.697042] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:32:33.859 [2024-07-12 10:46:27.697084] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:32:34.118 00:32:34.118 real 0m0.722s 00:32:34.118 user 0m0.490s 00:32:34.118 sys 0m0.132s 00:32:34.118 10:46:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:34.118 10:46:28 -- common/autotest_common.sh@10 -- # set +x 00:32:34.118 ************************************ 00:32:34.118 END TEST bdev_json_nonarray 00:32:34.118 ************************************ 00:32:34.375 10:46:28 -- bdev/blockdev.sh@785 -- # [[ raid5f == bdev ]] 00:32:34.375 10:46:28 -- bdev/blockdev.sh@792 -- # [[ raid5f == gpt ]] 00:32:34.375 10:46:28 -- bdev/blockdev.sh@796 -- # [[ raid5f == crypto_sw ]] 00:32:34.375 10:46:28 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:32:34.375 10:46:28 -- bdev/blockdev.sh@809 -- # cleanup 00:32:34.375 10:46:28 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:32:34.375 10:46:28 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:32:34.375 10:46:28 -- bdev/blockdev.sh@24 -- # [[ raid5f == rbd ]] 00:32:34.375 10:46:28 -- bdev/blockdev.sh@28 -- # [[ raid5f == daos ]] 00:32:34.375 10:46:28 -- bdev/blockdev.sh@32 -- # [[ raid5f = \g\p\t ]] 00:32:34.375 10:46:28 -- bdev/blockdev.sh@38 -- # [[ raid5f == xnvme ]] 00:32:34.375 00:32:34.375 real 0m47.037s 00:32:34.375 user 1m5.215s 00:32:34.375 sys 0m4.347s 00:32:34.375 10:46:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:34.375 10:46:28 -- common/autotest_common.sh@10 -- # set +x 00:32:34.375 ************************************ 00:32:34.375 END TEST blockdev_raid5f 00:32:34.375 ************************************ 00:32:34.375 10:46:28 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:32:34.375 10:46:28 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:32:34.375 10:46:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:34.375 10:46:28 -- common/autotest_common.sh@10 -- # set +x 00:32:34.375 10:46:28 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:32:34.375 10:46:28 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:32:34.375 10:46:28 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:32:34.375 10:46:28 -- common/autotest_common.sh@10 -- # set +x 00:32:35.749 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:32:36.008 Waiting for block devices as requested 00:32:36.008 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:32:36.286 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:32:36.588 Cleaning 00:32:36.588 Removing: /var/run/dpdk/spdk0/config 00:32:36.588 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:36.588 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:36.588 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:36.588 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:36.588 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:36.588 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:36.588 Removing: /dev/shm/spdk_tgt_trace.pid105222 00:32:36.588 Removing: /var/run/dpdk/spdk0 00:32:36.588 Removing: /var/run/dpdk/spdk_pid104954 00:32:36.588 Removing: /var/run/dpdk/spdk_pid105222 00:32:36.588 Removing: /var/run/dpdk/spdk_pid105525 00:32:36.588 Removing: /var/run/dpdk/spdk_pid105813 00:32:36.588 Removing: /var/run/dpdk/spdk_pid106009 00:32:36.588 Removing: /var/run/dpdk/spdk_pid106118 00:32:36.588 Removing: /var/run/dpdk/spdk_pid106256 
00:32:36.588 Removing: /var/run/dpdk/spdk_pid106382 00:32:36.588 Removing: /var/run/dpdk/spdk_pid106496 00:32:36.588 Removing: /var/run/dpdk/spdk_pid106554 00:32:36.588 Removing: /var/run/dpdk/spdk_pid106604 00:32:36.588 Removing: /var/run/dpdk/spdk_pid106680 00:32:36.588 Removing: /var/run/dpdk/spdk_pid106798 00:32:36.588 Removing: /var/run/dpdk/spdk_pid107375 00:32:36.588 Removing: /var/run/dpdk/spdk_pid107481 00:32:36.588 Removing: /var/run/dpdk/spdk_pid107568 00:32:36.589 Removing: /var/run/dpdk/spdk_pid107596 00:32:36.589 Removing: /var/run/dpdk/spdk_pid107759 00:32:36.589 Removing: /var/run/dpdk/spdk_pid107782 00:32:36.589 Removing: /var/run/dpdk/spdk_pid107944 00:32:36.589 Removing: /var/run/dpdk/spdk_pid107972 00:32:36.589 Removing: /var/run/dpdk/spdk_pid108053 00:32:36.589 Removing: /var/run/dpdk/spdk_pid108090 00:32:36.589 Removing: /var/run/dpdk/spdk_pid108159 00:32:36.589 Removing: /var/run/dpdk/spdk_pid108191 00:32:36.589 Removing: /var/run/dpdk/spdk_pid108398 00:32:36.589 Removing: /var/run/dpdk/spdk_pid108448 00:32:36.589 Removing: /var/run/dpdk/spdk_pid108486 00:32:36.589 Removing: /var/run/dpdk/spdk_pid108593 00:32:36.589 Removing: /var/run/dpdk/spdk_pid108684 00:32:36.589 Removing: /var/run/dpdk/spdk_pid108728 00:32:36.589 Removing: /var/run/dpdk/spdk_pid108820 00:32:36.589 Removing: /var/run/dpdk/spdk_pid108854 00:32:36.589 Removing: /var/run/dpdk/spdk_pid108917 00:32:36.589 Removing: /var/run/dpdk/spdk_pid108963 00:32:36.589 Removing: /var/run/dpdk/spdk_pid109010 00:32:36.589 Removing: /var/run/dpdk/spdk_pid109044 00:32:36.589 Removing: /var/run/dpdk/spdk_pid109113 00:32:36.589 Removing: /var/run/dpdk/spdk_pid109147 00:32:36.589 Removing: /var/run/dpdk/spdk_pid109194 00:32:36.589 Removing: /var/run/dpdk/spdk_pid109228 00:32:36.589 Removing: /var/run/dpdk/spdk_pid109298 00:32:36.589 Removing: /var/run/dpdk/spdk_pid109338 00:32:36.589 Removing: /var/run/dpdk/spdk_pid109385 00:32:36.589 Removing: /var/run/dpdk/spdk_pid109426 00:32:36.589 Removing: /var/run/dpdk/spdk_pid109474 00:32:36.589 Removing: /var/run/dpdk/spdk_pid109525 00:32:36.589 Removing: /var/run/dpdk/spdk_pid109585 00:32:36.589 Removing: /var/run/dpdk/spdk_pid109618 00:32:36.589 Removing: /var/run/dpdk/spdk_pid109672 00:32:36.589 Removing: /var/run/dpdk/spdk_pid109720 00:32:36.589 Removing: /var/run/dpdk/spdk_pid109767 00:32:36.589 Removing: /var/run/dpdk/spdk_pid109808 00:32:36.589 Removing: /var/run/dpdk/spdk_pid109856 00:32:36.589 Removing: /var/run/dpdk/spdk_pid109912 00:32:36.589 Removing: /var/run/dpdk/spdk_pid109967 00:32:36.589 Removing: /var/run/dpdk/spdk_pid110004 00:32:36.589 Removing: /var/run/dpdk/spdk_pid110050 00:32:36.589 Removing: /var/run/dpdk/spdk_pid110086 00:32:36.589 Removing: /var/run/dpdk/spdk_pid110154 00:32:36.589 Removing: /var/run/dpdk/spdk_pid110193 00:32:36.589 Removing: /var/run/dpdk/spdk_pid110247 00:32:36.589 Removing: /var/run/dpdk/spdk_pid110281 00:32:36.589 Removing: /var/run/dpdk/spdk_pid110349 00:32:36.589 Removing: /var/run/dpdk/spdk_pid110391 00:32:36.589 Removing: /var/run/dpdk/spdk_pid110448 00:32:36.589 Removing: /var/run/dpdk/spdk_pid110496 00:32:36.589 Removing: /var/run/dpdk/spdk_pid110561 00:32:36.589 Removing: /var/run/dpdk/spdk_pid110602 00:32:36.589 Removing: /var/run/dpdk/spdk_pid110650 00:32:36.589 Removing: /var/run/dpdk/spdk_pid110683 00:32:36.589 Removing: /var/run/dpdk/spdk_pid110760 00:32:36.589 Removing: /var/run/dpdk/spdk_pid110838 00:32:36.589 Removing: /var/run/dpdk/spdk_pid110965 00:32:36.589 Removing: /var/run/dpdk/spdk_pid111166 00:32:36.589 
Removing: /var/run/dpdk/spdk_pid111257 00:32:36.589 Removing: /var/run/dpdk/spdk_pid111313 00:32:36.589 Removing: /var/run/dpdk/spdk_pid112672 00:32:36.589 Removing: /var/run/dpdk/spdk_pid112909 00:32:36.589 Removing: /var/run/dpdk/spdk_pid113137 00:32:36.589 Removing: /var/run/dpdk/spdk_pid113281 00:32:36.589 Removing: /var/run/dpdk/spdk_pid113427 00:32:36.589 Removing: /var/run/dpdk/spdk_pid113501 00:32:36.589 Removing: /var/run/dpdk/spdk_pid113532 00:32:36.589 Removing: /var/run/dpdk/spdk_pid113570 00:32:36.589 Removing: /var/run/dpdk/spdk_pid114086 00:32:36.589 Removing: /var/run/dpdk/spdk_pid114198 00:32:36.589 Removing: /var/run/dpdk/spdk_pid114306 00:32:36.589 Removing: /var/run/dpdk/spdk_pid114387 00:32:36.589 Removing: /var/run/dpdk/spdk_pid115627 00:32:36.589 Removing: /var/run/dpdk/spdk_pid116569 00:32:36.867 Removing: /var/run/dpdk/spdk_pid117504 00:32:36.867 Removing: /var/run/dpdk/spdk_pid118688 00:32:36.867 Removing: /var/run/dpdk/spdk_pid119824 00:32:36.867 Removing: /var/run/dpdk/spdk_pid120937 00:32:36.867 Removing: /var/run/dpdk/spdk_pid122494 00:32:36.867 Removing: /var/run/dpdk/spdk_pid123760 00:32:36.867 Removing: /var/run/dpdk/spdk_pid125028 00:32:36.868 Removing: /var/run/dpdk/spdk_pid125734 00:32:36.868 Removing: /var/run/dpdk/spdk_pid126306 00:32:36.868 Removing: /var/run/dpdk/spdk_pid126975 00:32:36.868 Removing: /var/run/dpdk/spdk_pid127514 00:32:36.868 Removing: /var/run/dpdk/spdk_pid128108 00:32:36.868 Removing: /var/run/dpdk/spdk_pid128689 00:32:36.868 Removing: /var/run/dpdk/spdk_pid129395 00:32:36.868 Removing: /var/run/dpdk/spdk_pid129974 00:32:36.868 Removing: /var/run/dpdk/spdk_pid131415 00:32:36.868 Removing: /var/run/dpdk/spdk_pid132050 00:32:36.868 Removing: /var/run/dpdk/spdk_pid132635 00:32:36.868 Removing: /var/run/dpdk/spdk_pid134239 00:32:36.868 Removing: /var/run/dpdk/spdk_pid134940 00:32:36.868 Removing: /var/run/dpdk/spdk_pid135603 00:32:36.868 Removing: /var/run/dpdk/spdk_pid136413 00:32:36.868 Removing: /var/run/dpdk/spdk_pid136474 00:32:36.868 Removing: /var/run/dpdk/spdk_pid136543 00:32:36.868 Removing: /var/run/dpdk/spdk_pid136601 00:32:36.868 Removing: /var/run/dpdk/spdk_pid136732 00:32:36.868 Removing: /var/run/dpdk/spdk_pid136898 00:32:36.868 Removing: /var/run/dpdk/spdk_pid137106 00:32:36.868 Removing: /var/run/dpdk/spdk_pid137417 00:32:36.868 Removing: /var/run/dpdk/spdk_pid137432 00:32:36.868 Removing: /var/run/dpdk/spdk_pid137483 00:32:36.868 Removing: /var/run/dpdk/spdk_pid137518 00:32:36.868 Removing: /var/run/dpdk/spdk_pid137547 00:32:36.868 Removing: /var/run/dpdk/spdk_pid137597 00:32:36.868 Removing: /var/run/dpdk/spdk_pid137618 00:32:36.868 Removing: /var/run/dpdk/spdk_pid137646 00:32:36.868 Removing: /var/run/dpdk/spdk_pid137678 00:32:36.868 Removing: /var/run/dpdk/spdk_pid137698 00:32:36.868 Removing: /var/run/dpdk/spdk_pid137748 00:32:36.868 Removing: /var/run/dpdk/spdk_pid137775 00:32:36.868 Removing: /var/run/dpdk/spdk_pid137807 00:32:36.868 Removing: /var/run/dpdk/spdk_pid137828 00:32:36.868 Removing: /var/run/dpdk/spdk_pid137860 00:32:36.868 Removing: /var/run/dpdk/spdk_pid137887 00:32:36.868 Removing: /var/run/dpdk/spdk_pid137935 00:32:36.868 Removing: /var/run/dpdk/spdk_pid137963 00:32:36.868 Removing: /var/run/dpdk/spdk_pid137987 00:32:36.868 Removing: /var/run/dpdk/spdk_pid138016 00:32:36.868 Removing: /var/run/dpdk/spdk_pid138065 00:32:36.868 Removing: /var/run/dpdk/spdk_pid138091 00:32:36.868 Removing: /var/run/dpdk/spdk_pid138152 00:32:36.868 Removing: /var/run/dpdk/spdk_pid138221 00:32:36.868 Removing: 
/var/run/dpdk/spdk_pid138275 00:32:36.868 Removing: /var/run/dpdk/spdk_pid138302 00:32:36.868 Removing: /var/run/dpdk/spdk_pid138340 00:32:36.868 Removing: /var/run/dpdk/spdk_pid138368 00:32:36.868 Removing: /var/run/dpdk/spdk_pid138407 00:32:36.868 Removing: /var/run/dpdk/spdk_pid138471 00:32:36.868 Removing: /var/run/dpdk/spdk_pid138498 00:32:36.868 Removing: /var/run/dpdk/spdk_pid138534 00:32:36.868 Removing: /var/run/dpdk/spdk_pid138566 00:32:36.868 Removing: /var/run/dpdk/spdk_pid138584 00:32:36.868 Removing: /var/run/dpdk/spdk_pid138628 00:32:36.868 Removing: /var/run/dpdk/spdk_pid138645 00:32:36.868 Removing: /var/run/dpdk/spdk_pid138673 00:32:36.868 Removing: /var/run/dpdk/spdk_pid138698 00:32:36.868 Removing: /var/run/dpdk/spdk_pid138715 00:32:36.868 Removing: /var/run/dpdk/spdk_pid138766 00:32:36.868 Removing: /var/run/dpdk/spdk_pid138816 00:32:36.868 Removing: /var/run/dpdk/spdk_pid138850 00:32:36.868 Removing: /var/run/dpdk/spdk_pid138901 00:32:36.868 Removing: /var/run/dpdk/spdk_pid138921 00:32:36.868 Removing: /var/run/dpdk/spdk_pid138943 00:32:36.868 Removing: /var/run/dpdk/spdk_pid139011 00:32:36.868 Removing: /var/run/dpdk/spdk_pid139053 00:32:36.868 Removing: /var/run/dpdk/spdk_pid139096 00:32:36.868 Removing: /var/run/dpdk/spdk_pid139129 00:32:36.868 Removing: /var/run/dpdk/spdk_pid139153 00:32:36.868 Removing: /var/run/dpdk/spdk_pid139174 00:32:36.868 Removing: /var/run/dpdk/spdk_pid139199 00:32:36.868 Removing: /var/run/dpdk/spdk_pid139238 00:32:36.868 Removing: /var/run/dpdk/spdk_pid139267 00:32:36.868 Removing: /var/run/dpdk/spdk_pid139284 00:32:36.868 Removing: /var/run/dpdk/spdk_pid139380 00:32:36.868 Removing: /var/run/dpdk/spdk_pid139487 00:32:36.868 Removing: /var/run/dpdk/spdk_pid139652 00:32:36.868 Removing: /var/run/dpdk/spdk_pid139688 00:32:36.868 Removing: /var/run/dpdk/spdk_pid139739 00:32:36.868 Removing: /var/run/dpdk/spdk_pid139806 00:32:36.868 Removing: /var/run/dpdk/spdk_pid139838 00:32:36.868 Removing: /var/run/dpdk/spdk_pid139885 00:32:36.868 Removing: /var/run/dpdk/spdk_pid139912 00:32:36.868 Removing: /var/run/dpdk/spdk_pid139956 00:32:36.868 Removing: /var/run/dpdk/spdk_pid139989 00:32:36.868 Removing: /var/run/dpdk/spdk_pid140073 00:32:36.868 Removing: /var/run/dpdk/spdk_pid140132 00:32:36.868 Removing: /var/run/dpdk/spdk_pid140203 00:32:36.868 Removing: /var/run/dpdk/spdk_pid140471 00:32:36.868 Removing: /var/run/dpdk/spdk_pid140592 00:32:37.136 Removing: /var/run/dpdk/spdk_pid140640 00:32:37.136 Removing: /var/run/dpdk/spdk_pid140734 00:32:37.136 Removing: /var/run/dpdk/spdk_pid140851 00:32:37.136 Removing: /var/run/dpdk/spdk_pid140889 00:32:37.136 Removing: /var/run/dpdk/spdk_pid141182 00:32:37.137 Removing: /var/run/dpdk/spdk_pid141353 00:32:37.137 Removing: /var/run/dpdk/spdk_pid141473 00:32:37.137 Removing: /var/run/dpdk/spdk_pid141536 00:32:37.137 Removing: /var/run/dpdk/spdk_pid141568 00:32:37.137 Removing: /var/run/dpdk/spdk_pid141651 00:32:37.137 Removing: /var/run/dpdk/spdk_pid142206 00:32:37.137 Removing: /var/run/dpdk/spdk_pid142256 00:32:37.137 Removing: /var/run/dpdk/spdk_pid142600 00:32:37.137 Removing: /var/run/dpdk/spdk_pid142726 00:32:37.137 Removing: /var/run/dpdk/spdk_pid142832 00:32:37.137 Removing: /var/run/dpdk/spdk_pid142906 00:32:37.137 Removing: /var/run/dpdk/spdk_pid142937 00:32:37.137 Removing: /var/run/dpdk/spdk_pid142975 00:32:37.137 Removing: /var/run/dpdk/spdk_pid144424 00:32:37.137 Removing: /var/run/dpdk/spdk_pid144574 00:32:37.137 Removing: /var/run/dpdk/spdk_pid144591 00:32:37.137 Removing: 
/var/run/dpdk/spdk_pid144617 00:32:37.137 Removing: /var/run/dpdk/spdk_pid145119 00:32:37.137 Removing: /var/run/dpdk/spdk_pid145254 00:32:37.137 Removing: /var/run/dpdk/spdk_pid145410 00:32:37.137 Removing: /var/run/dpdk/spdk_pid145489 00:32:37.137 Removing: /var/run/dpdk/spdk_pid145551 00:32:37.137 Removing: /var/run/dpdk/spdk_pid145882 00:32:37.137 Removing: /var/run/dpdk/spdk_pid146081 00:32:37.137 Removing: /var/run/dpdk/spdk_pid146207 00:32:37.137 Removing: /var/run/dpdk/spdk_pid146308 00:32:37.137 Removing: /var/run/dpdk/spdk_pid146383 00:32:37.137 Removing: /var/run/dpdk/spdk_pid146421 00:32:37.137 Clean 00:32:37.137 killing process with pid 93904 00:32:37.137 killing process with pid 93988 00:32:37.137 10:46:30 -- common/autotest_common.sh@1436 -- # return 0 00:32:37.137 10:46:30 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:32:37.137 10:46:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:37.137 10:46:30 -- common/autotest_common.sh@10 -- # set +x 00:32:37.137 10:46:31 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:32:37.137 10:46:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:37.137 10:46:31 -- common/autotest_common.sh@10 -- # set +x 00:32:37.395 10:46:31 -- spdk/autotest.sh@390 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:32:37.395 10:46:31 -- spdk/autotest.sh@392 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:32:37.395 10:46:31 -- spdk/autotest.sh@392 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:32:37.395 10:46:31 -- spdk/autotest.sh@394 -- # hash lcov 00:32:37.395 10:46:31 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:32:37.395 10:46:31 -- spdk/autotest.sh@396 -- # hostname 00:32:37.395 10:46:31 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t ubuntu2004-cloud-1712646987-2220 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:32:37.654 geninfo: WARNING: invalid characters removed from testname! 
00:33:16.356 10:47:08 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:19.642 10:47:13 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:22.170 10:47:15 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:25.458 10:47:18 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:27.992 10:47:21 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:30.522 10:47:23 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:33.048 10:47:26 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:33:33.048 10:47:26 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:33.048 10:47:26 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:33:33.048 10:47:26 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:33.048 10:47:26 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:33.048 10:47:26 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:33:33.048 10:47:26 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:33:33.048 10:47:26 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:33:33.048 10:47:26 -- paths/export.sh@5 -- $ export PATH 00:33:33.048 10:47:26 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:33:33.048 10:47:26 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:33:33.048 10:47:26 -- common/autobuild_common.sh@435 -- $ date +%s 00:33:33.048 10:47:26 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1720781246.XXXXXX 00:33:33.048 10:47:26 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1720781246.qs0e5g 00:33:33.048 10:47:26 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:33:33.048 10:47:26 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:33:33.048 10:47:26 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:33:33.048 10:47:26 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:33:33.048 10:47:26 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:33:33.048 10:47:26 -- common/autobuild_common.sh@451 -- $ get_config_params 00:33:33.048 10:47:26 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:33:33.048 10:47:26 -- common/autotest_common.sh@10 -- $ set +x 00:33:33.048 10:47:26 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f' 00:33:33.048 10:47:26 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:33:33.048 10:47:26 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:33:33.048 10:47:26 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:33:33.048 10:47:26 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:33:33.048 10:47:26 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:33:33.048 10:47:26 -- spdk/autopackage.sh@23 -- $ timing_enter build_release 00:33:33.048 10:47:26 -- common/autotest_common.sh@712 -- $ xtrace_disable 00:33:33.048 10:47:26 -- common/autotest_common.sh@10 -- $ set +x 00:33:33.048 10:47:26 -- spdk/autopackage.sh@26 -- $ [[ '' == *clang* ]] 00:33:33.048 10:47:26 -- spdk/autopackage.sh@36 -- $ [[ -n '' ]] 00:33:33.048 10:47:26 -- spdk/autopackage.sh@40 -- $ get_config_params 00:33:33.048 10:47:26 -- spdk/autopackage.sh@40 -- $ sed s/--enable-debug//g 00:33:33.048 10:47:26 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:33:33.048 10:47:26 -- common/autotest_common.sh@10 -- $ set +x 00:33:33.048 10:47:26 -- spdk/autopackage.sh@40 -- $ config_params=' --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f' 00:33:33.048 10:47:26 -- spdk/autopackage.sh@41 -- $ 
/home/vagrant/spdk_repo/spdk/configure --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --enable-lto 00:33:33.048 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:33:33.048 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:33:33.305 Using 'verbs' RDMA provider 00:33:46.068 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:33:58.270 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:33:58.270 Creating mk/config.mk...done. 00:33:58.270 Creating mk/cc.flags.mk...done. 00:33:58.270 Type 'make' to build. 00:33:58.270 10:47:52 -- spdk/autopackage.sh@43 -- $ make -j10 00:33:58.529 make[1]: Nothing to be done for 'all'. 00:34:03.863 The Meson build system 00:34:03.863 Version: 1.4.0 00:34:03.863 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:34:03.863 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:34:03.863 Build type: native build 00:34:03.863 Program cat found: YES (/usr/bin/cat) 00:34:03.863 Project name: DPDK 00:34:03.863 Project version: 23.11.0 00:34:03.863 C compiler for the host machine: cc (gcc 9.4.0 "cc (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0") 00:34:03.863 C linker for the host machine: cc ld.bfd 2.34 00:34:03.863 Host machine cpu family: x86_64 00:34:03.863 Host machine cpu: x86_64 00:34:03.863 Message: ## Building in Developer Mode ## 00:34:03.863 Program pkg-config found: YES (/usr/bin/pkg-config) 00:34:03.863 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:34:03.863 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:34:03.863 Program python3 found: YES (/usr/bin/python3) 00:34:03.863 Program cat found: YES (/usr/bin/cat) 00:34:03.863 Compiler for C supports arguments -march=native: YES 00:34:03.863 Checking for size of "void *" : 8 00:34:03.863 Checking for size of "void *" : 8 (cached) 00:34:03.863 Library m found: YES 00:34:03.863 Library numa found: YES 00:34:03.863 Has header "numaif.h" : YES 00:34:03.863 Library fdt found: NO 00:34:03.863 Library execinfo found: NO 00:34:03.863 Has header "execinfo.h" : YES 00:34:03.863 Found pkg-config: YES (/usr/bin/pkg-config) 0.29.1 00:34:03.863 Run-time dependency libarchive found: NO (tried pkgconfig) 00:34:03.863 Run-time dependency libbsd found: NO (tried pkgconfig) 00:34:03.863 Run-time dependency jansson found: NO (tried pkgconfig) 00:34:03.863 Run-time dependency openssl found: YES 1.1.1f 00:34:03.863 Run-time dependency libpcap found: NO (tried pkgconfig) 00:34:03.863 Library pcap found: NO 00:34:03.863 Compiler for C supports arguments -Wcast-qual: YES 00:34:03.863 Compiler for C supports arguments -Wdeprecated: YES 00:34:03.863 Compiler for C supports arguments -Wformat: YES 00:34:03.863 Compiler for C supports arguments -Wformat-nonliteral: YES 00:34:03.863 Compiler for C supports arguments -Wformat-security: YES 00:34:03.863 Compiler for C supports arguments -Wmissing-declarations: YES 00:34:03.863 Compiler for C supports arguments -Wmissing-prototypes: YES 00:34:03.863 Compiler for C supports arguments -Wnested-externs: YES 00:34:03.863 Compiler for C supports arguments -Wold-style-definition: YES 00:34:03.863 Compiler for C supports arguments -Wpointer-arith: YES 00:34:03.863 Compiler for C supports arguments -Wsign-compare: YES 00:34:03.863 Compiler for C 
supports arguments -Wstrict-prototypes: YES 00:34:03.863 Compiler for C supports arguments -Wundef: YES 00:34:03.863 Compiler for C supports arguments -Wwrite-strings: YES 00:34:03.863 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:34:03.863 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:34:03.863 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:34:03.863 Program objdump found: YES (/usr/bin/objdump) 00:34:03.863 Compiler for C supports arguments -mavx512f: YES 00:34:03.863 Checking if "AVX512 checking" compiles: YES 00:34:03.863 Fetching value of define "__SSE4_2__" : 1 00:34:03.863 Fetching value of define "__AES__" : 1 00:34:03.863 Fetching value of define "__AVX__" : 1 00:34:03.863 Fetching value of define "__AVX2__" : 1 00:34:03.863 Fetching value of define "__AVX512BW__" : (undefined) 00:34:03.863 Fetching value of define "__AVX512CD__" : (undefined) 00:34:03.863 Fetching value of define "__AVX512DQ__" : (undefined) 00:34:03.863 Fetching value of define "__AVX512F__" : (undefined) 00:34:03.863 Fetching value of define "__AVX512VL__" : (undefined) 00:34:03.863 Fetching value of define "__PCLMUL__" : 1 00:34:03.863 Fetching value of define "__RDRND__" : 1 00:34:03.863 Fetching value of define "__RDSEED__" : 1 00:34:03.863 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:34:03.863 Fetching value of define "__znver1__" : (undefined) 00:34:03.863 Fetching value of define "__znver2__" : (undefined) 00:34:03.863 Fetching value of define "__znver3__" : (undefined) 00:34:03.863 Fetching value of define "__znver4__" : (undefined) 00:34:03.863 Compiler for C supports arguments -ffat-lto-objects: YES 00:34:03.863 Library asan found: YES 00:34:03.863 Compiler for C supports arguments -Wno-format-truncation: YES 00:34:03.863 Message: lib/log: Defining dependency "log" 00:34:03.863 Message: lib/kvargs: Defining dependency "kvargs" 00:34:03.863 Message: lib/telemetry: Defining dependency "telemetry" 00:34:03.863 Library rt found: YES 00:34:03.863 Checking for function "getentropy" : NO 00:34:03.863 Message: lib/eal: Defining dependency "eal" 00:34:03.863 Message: lib/ring: Defining dependency "ring" 00:34:03.863 Message: lib/rcu: Defining dependency "rcu" 00:34:03.863 Message: lib/mempool: Defining dependency "mempool" 00:34:03.863 Message: lib/mbuf: Defining dependency "mbuf" 00:34:03.863 Fetching value of define "__PCLMUL__" : 1 (cached) 00:34:03.863 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:34:03.863 Compiler for C supports arguments -mpclmul: YES 00:34:03.863 Compiler for C supports arguments -maes: YES 00:34:03.863 Compiler for C supports arguments -mavx512f: YES (cached) 00:34:03.863 Compiler for C supports arguments -mavx512bw: YES 00:34:03.863 Compiler for C supports arguments -mavx512dq: YES 00:34:03.863 Compiler for C supports arguments -mavx512vl: YES 00:34:03.863 Compiler for C supports arguments -mvpclmulqdq: YES 00:34:03.863 Compiler for C supports arguments -mavx2: YES 00:34:03.863 Compiler for C supports arguments -mavx: YES 00:34:03.863 Message: lib/net: Defining dependency "net" 00:34:03.863 Message: lib/meter: Defining dependency "meter" 00:34:03.863 Message: lib/ethdev: Defining dependency "ethdev" 00:34:03.863 Message: lib/pci: Defining dependency "pci" 00:34:03.864 Message: lib/cmdline: Defining dependency "cmdline" 00:34:03.864 Message: lib/hash: Defining dependency "hash" 00:34:03.864 Message: lib/timer: Defining dependency "timer" 00:34:03.864 Message: lib/compressdev: 
Defining dependency "compressdev" 00:34:03.864 Message: lib/cryptodev: Defining dependency "cryptodev" 00:34:03.864 Message: lib/dmadev: Defining dependency "dmadev" 00:34:03.864 Compiler for C supports arguments -Wno-cast-qual: YES 00:34:03.864 Message: lib/power: Defining dependency "power" 00:34:03.864 Message: lib/reorder: Defining dependency "reorder" 00:34:03.864 Message: lib/security: Defining dependency "security" 00:34:03.864 Has header "linux/userfaultfd.h" : YES 00:34:03.864 Has header "linux/vduse.h" : NO 00:34:03.864 Message: lib/vhost: Defining dependency "vhost" 00:34:03.864 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:34:03.864 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:34:03.864 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:34:03.864 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:34:03.864 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:34:03.864 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:34:03.864 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:34:03.864 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:34:03.864 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:34:03.864 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:34:03.864 Program doxygen found: YES (/usr/bin/doxygen) 00:34:03.864 Configuring doxy-api-html.conf using configuration 00:34:03.864 Configuring doxy-api-man.conf using configuration 00:34:03.864 Program mandb found: YES (/usr/bin/mandb) 00:34:03.864 Program sphinx-build found: NO 00:34:03.864 Configuring rte_build_config.h using configuration 00:34:03.864 Message: 00:34:03.864 ================= 00:34:03.864 Applications Enabled 00:34:03.864 ================= 00:34:03.864 00:34:03.864 apps: 00:34:03.864 00:34:03.864 00:34:03.864 Message: 00:34:03.864 ================= 00:34:03.864 Libraries Enabled 00:34:03.864 ================= 00:34:03.864 00:34:03.864 libs: 00:34:03.864 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:34:03.864 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:34:03.864 cryptodev, dmadev, power, reorder, security, vhost, 00:34:03.864 00:34:03.864 Message: 00:34:03.864 =============== 00:34:03.864 Drivers Enabled 00:34:03.864 =============== 00:34:03.864 00:34:03.864 common: 00:34:03.864 00:34:03.864 bus: 00:34:03.864 pci, vdev, 00:34:03.864 mempool: 00:34:03.864 ring, 00:34:03.864 dma: 00:34:03.864 00:34:03.864 net: 00:34:03.864 00:34:03.864 crypto: 00:34:03.864 00:34:03.864 compress: 00:34:03.864 00:34:03.864 vdpa: 00:34:03.864 00:34:03.864 00:34:03.864 Message: 00:34:03.864 ================= 00:34:03.864 Content Skipped 00:34:03.864 ================= 00:34:03.864 00:34:03.864 apps: 00:34:03.864 dumpcap: explicitly disabled via build config 00:34:03.864 graph: explicitly disabled via build config 00:34:03.864 pdump: explicitly disabled via build config 00:34:03.864 proc-info: explicitly disabled via build config 00:34:03.864 test-acl: explicitly disabled via build config 00:34:03.864 test-bbdev: explicitly disabled via build config 00:34:03.864 test-cmdline: explicitly disabled via build config 00:34:03.864 test-compress-perf: explicitly disabled via build config 00:34:03.864 test-crypto-perf: explicitly disabled via build config 00:34:03.864 test-dma-perf: explicitly disabled via build config 00:34:03.864 test-eventdev: explicitly disabled via build config 00:34:03.864 
test-fib: explicitly disabled via build config 00:34:03.864 test-flow-perf: explicitly disabled via build config 00:34:03.864 test-gpudev: explicitly disabled via build config 00:34:03.864 test-mldev: explicitly disabled via build config 00:34:03.864 test-pipeline: explicitly disabled via build config 00:34:03.864 test-pmd: explicitly disabled via build config 00:34:03.864 test-regex: explicitly disabled via build config 00:34:03.864 test-sad: explicitly disabled via build config 00:34:03.864 test-security-perf: explicitly disabled via build config 00:34:03.864 00:34:03.864 libs: 00:34:03.864 metrics: explicitly disabled via build config 00:34:03.864 acl: explicitly disabled via build config 00:34:03.864 bbdev: explicitly disabled via build config 00:34:03.864 bitratestats: explicitly disabled via build config 00:34:03.864 bpf: explicitly disabled via build config 00:34:03.864 cfgfile: explicitly disabled via build config 00:34:03.864 distributor: explicitly disabled via build config 00:34:03.864 efd: explicitly disabled via build config 00:34:03.864 eventdev: explicitly disabled via build config 00:34:03.864 dispatcher: explicitly disabled via build config 00:34:03.864 gpudev: explicitly disabled via build config 00:34:03.864 gro: explicitly disabled via build config 00:34:03.864 gso: explicitly disabled via build config 00:34:03.864 ip_frag: explicitly disabled via build config 00:34:03.864 jobstats: explicitly disabled via build config 00:34:03.864 latencystats: explicitly disabled via build config 00:34:03.864 lpm: explicitly disabled via build config 00:34:03.864 member: explicitly disabled via build config 00:34:03.864 pcapng: explicitly disabled via build config 00:34:03.864 rawdev: explicitly disabled via build config 00:34:03.864 regexdev: explicitly disabled via build config 00:34:03.864 mldev: explicitly disabled via build config 00:34:03.864 rib: explicitly disabled via build config 00:34:03.864 sched: explicitly disabled via build config 00:34:03.864 stack: explicitly disabled via build config 00:34:03.864 ipsec: explicitly disabled via build config 00:34:03.864 pdcp: explicitly disabled via build config 00:34:03.864 fib: explicitly disabled via build config 00:34:03.864 port: explicitly disabled via build config 00:34:03.864 pdump: explicitly disabled via build config 00:34:03.864 table: explicitly disabled via build config 00:34:03.864 pipeline: explicitly disabled via build config 00:34:03.864 graph: explicitly disabled via build config 00:34:03.864 node: explicitly disabled via build config 00:34:03.864 00:34:03.864 drivers: 00:34:03.864 common/cpt: not in enabled drivers build config 00:34:03.864 common/dpaax: not in enabled drivers build config 00:34:03.864 common/iavf: not in enabled drivers build config 00:34:03.864 common/idpf: not in enabled drivers build config 00:34:03.864 common/mvep: not in enabled drivers build config 00:34:03.864 common/octeontx: not in enabled drivers build config 00:34:03.864 bus/auxiliary: not in enabled drivers build config 00:34:03.864 bus/cdx: not in enabled drivers build config 00:34:03.864 bus/dpaa: not in enabled drivers build config 00:34:03.864 bus/fslmc: not in enabled drivers build config 00:34:03.864 bus/ifpga: not in enabled drivers build config 00:34:03.864 bus/platform: not in enabled drivers build config 00:34:03.864 bus/vmbus: not in enabled drivers build config 00:34:03.864 common/cnxk: not in enabled drivers build config 00:34:03.864 common/mlx5: not in enabled drivers build config 00:34:03.864 common/nfp: not in enabled 
drivers build config 00:34:03.864 common/qat: not in enabled drivers build config 00:34:03.864 common/sfc_efx: not in enabled drivers build config 00:34:03.864 mempool/bucket: not in enabled drivers build config 00:34:03.864 mempool/cnxk: not in enabled drivers build config 00:34:03.864 mempool/dpaa: not in enabled drivers build config 00:34:03.864 mempool/dpaa2: not in enabled drivers build config 00:34:03.864 mempool/octeontx: not in enabled drivers build config 00:34:03.864 mempool/stack: not in enabled drivers build config 00:34:03.864 dma/cnxk: not in enabled drivers build config 00:34:03.864 dma/dpaa: not in enabled drivers build config 00:34:03.864 dma/dpaa2: not in enabled drivers build config 00:34:03.864 dma/hisilicon: not in enabled drivers build config 00:34:03.864 dma/idxd: not in enabled drivers build config 00:34:03.864 dma/ioat: not in enabled drivers build config 00:34:03.864 dma/skeleton: not in enabled drivers build config 00:34:03.864 net/af_packet: not in enabled drivers build config 00:34:03.864 net/af_xdp: not in enabled drivers build config 00:34:03.864 net/ark: not in enabled drivers build config 00:34:03.864 net/atlantic: not in enabled drivers build config 00:34:03.864 net/avp: not in enabled drivers build config 00:34:03.864 net/axgbe: not in enabled drivers build config 00:34:03.864 net/bnx2x: not in enabled drivers build config 00:34:03.864 net/bnxt: not in enabled drivers build config 00:34:03.864 net/bonding: not in enabled drivers build config 00:34:03.864 net/cnxk: not in enabled drivers build config 00:34:03.864 net/cpfl: not in enabled drivers build config 00:34:03.864 net/cxgbe: not in enabled drivers build config 00:34:03.864 net/dpaa: not in enabled drivers build config 00:34:03.864 net/dpaa2: not in enabled drivers build config 00:34:03.864 net/e1000: not in enabled drivers build config 00:34:03.864 net/ena: not in enabled drivers build config 00:34:03.864 net/enetc: not in enabled drivers build config 00:34:03.864 net/enetfec: not in enabled drivers build config 00:34:03.864 net/enic: not in enabled drivers build config 00:34:03.864 net/failsafe: not in enabled drivers build config 00:34:03.864 net/fm10k: not in enabled drivers build config 00:34:03.864 net/gve: not in enabled drivers build config 00:34:03.864 net/hinic: not in enabled drivers build config 00:34:03.864 net/hns3: not in enabled drivers build config 00:34:03.864 net/i40e: not in enabled drivers build config 00:34:03.864 net/iavf: not in enabled drivers build config 00:34:03.864 net/ice: not in enabled drivers build config 00:34:03.864 net/idpf: not in enabled drivers build config 00:34:03.864 net/igc: not in enabled drivers build config 00:34:03.864 net/ionic: not in enabled drivers build config 00:34:03.864 net/ipn3ke: not in enabled drivers build config 00:34:03.864 net/ixgbe: not in enabled drivers build config 00:34:03.864 net/mana: not in enabled drivers build config 00:34:03.864 net/memif: not in enabled drivers build config 00:34:03.864 net/mlx4: not in enabled drivers build config 00:34:03.864 net/mlx5: not in enabled drivers build config 00:34:03.864 net/mvneta: not in enabled drivers build config 00:34:03.864 net/mvpp2: not in enabled drivers build config 00:34:03.864 net/netvsc: not in enabled drivers build config 00:34:03.864 net/nfb: not in enabled drivers build config 00:34:03.864 net/nfp: not in enabled drivers build config 00:34:03.864 net/ngbe: not in enabled drivers build config 00:34:03.864 net/null: not in enabled drivers build config 00:34:03.864 net/octeontx: not 
in enabled drivers build config 00:34:03.864 net/octeon_ep: not in enabled drivers build config 00:34:03.864 net/pcap: not in enabled drivers build config 00:34:03.864 net/pfe: not in enabled drivers build config 00:34:03.864 net/qede: not in enabled drivers build config 00:34:03.865 net/ring: not in enabled drivers build config 00:34:03.865 net/sfc: not in enabled drivers build config 00:34:03.865 net/softnic: not in enabled drivers build config 00:34:03.865 net/tap: not in enabled drivers build config 00:34:03.865 net/thunderx: not in enabled drivers build config 00:34:03.865 net/txgbe: not in enabled drivers build config 00:34:03.865 net/vdev_netvsc: not in enabled drivers build config 00:34:03.865 net/vhost: not in enabled drivers build config 00:34:03.865 net/virtio: not in enabled drivers build config 00:34:03.865 net/vmxnet3: not in enabled drivers build config 00:34:03.865 raw/*: missing internal dependency, "rawdev" 00:34:03.865 crypto/armv8: not in enabled drivers build config 00:34:03.865 crypto/bcmfs: not in enabled drivers build config 00:34:03.865 crypto/caam_jr: not in enabled drivers build config 00:34:03.865 crypto/ccp: not in enabled drivers build config 00:34:03.865 crypto/cnxk: not in enabled drivers build config 00:34:03.865 crypto/dpaa_sec: not in enabled drivers build config 00:34:03.865 crypto/dpaa2_sec: not in enabled drivers build config 00:34:03.865 crypto/ipsec_mb: not in enabled drivers build config 00:34:03.865 crypto/mlx5: not in enabled drivers build config 00:34:03.865 crypto/mvsam: not in enabled drivers build config 00:34:03.865 crypto/nitrox: not in enabled drivers build config 00:34:03.865 crypto/null: not in enabled drivers build config 00:34:03.865 crypto/octeontx: not in enabled drivers build config 00:34:03.865 crypto/openssl: not in enabled drivers build config 00:34:03.865 crypto/scheduler: not in enabled drivers build config 00:34:03.865 crypto/uadk: not in enabled drivers build config 00:34:03.865 crypto/virtio: not in enabled drivers build config 00:34:03.865 compress/isal: not in enabled drivers build config 00:34:03.865 compress/mlx5: not in enabled drivers build config 00:34:03.865 compress/octeontx: not in enabled drivers build config 00:34:03.865 compress/zlib: not in enabled drivers build config 00:34:03.865 regex/*: missing internal dependency, "regexdev" 00:34:03.865 ml/*: missing internal dependency, "mldev" 00:34:03.865 vdpa/ifc: not in enabled drivers build config 00:34:03.865 vdpa/mlx5: not in enabled drivers build config 00:34:03.865 vdpa/nfp: not in enabled drivers build config 00:34:03.865 vdpa/sfc: not in enabled drivers build config 00:34:03.865 event/*: missing internal dependency, "eventdev" 00:34:03.865 baseband/*: missing internal dependency, "bbdev" 00:34:03.865 gpu/*: missing internal dependency, "gpudev" 00:34:03.865 00:34:03.865 00:34:03.865 Build targets in project: 85 00:34:03.865 00:34:03.865 DPDK 23.11.0 00:34:03.865 00:34:03.865 User defined options 00:34:03.865 default_library : static 00:34:03.865 libdir : lib 00:34:03.865 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:34:03.865 b_lto : true 00:34:03.865 b_sanitize : address 00:34:03.865 c_args : -fPIC -Werror 00:34:03.865 c_link_args : 00:34:03.865 cpu_instruction_set: native 00:34:03.865 disable_apps : graph,dumpcap,test,test-gpudev,test-dma-perf,test-cmdline,test-compress-perf,pdump,test-fib,test-mldev,test-regex,proc-info,test-crypto-perf,test-pipeline,test-security-perf,test-acl,test-sad,test-pmd,test-flow-perf,test-bbdev,test-eventdev 00:34:03.865 
disable_libs : gro,eventdev,lpm,efd,node,acl,bitratestats,port,graph,pipeline,pdcp,gpudev,ipsec,jobstats,dispatcher,mldev,pdump,gso,metrics,latencystats,bbdev,rawdev,stack,member,cfgfile,sched,pcapng,bpf,ip_frag,distributor,fib,regexdev,rib,table 00:34:03.865 enable_docs : false 00:34:03.865 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:34:03.865 enable_kmods : false 00:34:03.865 tests : false 00:34:03.865 00:34:03.865 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:34:04.432 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:34:04.432 [1/264] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:34:04.432 [2/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:34:04.432 [3/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:34:04.432 [4/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:34:04.432 [5/264] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:34:04.691 [6/264] Linking static target lib/librte_kvargs.a 00:34:04.691 [7/264] Compiling C object lib/librte_log.a.p/log_log.c.o 00:34:04.691 [8/264] Linking static target lib/librte_log.a 00:34:04.691 [9/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:34:04.691 [10/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:34:04.691 [11/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:34:04.950 [12/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:34:04.950 [13/264] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:34:04.950 [14/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:34:04.950 [15/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:34:04.950 [16/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:34:05.208 [17/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:34:05.208 [18/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:34:05.208 [19/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:34:05.208 [20/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:34:05.208 [21/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:34:05.467 [22/264] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:34:05.467 [23/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:34:05.467 [24/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:34:05.467 [25/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:34:05.725 [26/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:34:05.725 [27/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:34:05.725 [28/264] Linking target lib/librte_log.so.24.0 00:34:05.725 [29/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:34:05.725 [30/264] Linking static target lib/librte_telemetry.a 00:34:05.725 [31/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:34:05.725 [32/264] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:34:05.725 [33/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:34:05.984 [34/264] Compiling C 
object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:34:05.984 [35/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:34:05.984 [36/264] Linking target lib/librte_kvargs.so.24.0 00:34:05.984 [37/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:34:05.984 [38/264] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:34:05.984 [39/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:34:05.984 [40/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:34:05.984 [41/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:34:05.984 [42/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:34:06.243 [43/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:34:06.243 [44/264] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:34:06.502 [45/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:34:06.502 [46/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:34:06.502 [47/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:34:06.502 [48/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:34:06.502 [49/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:34:06.502 [50/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:34:06.502 [51/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:34:06.502 [52/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:34:06.761 [53/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:34:06.761 [54/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:34:06.761 [55/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:34:06.761 [56/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:34:06.761 [57/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:34:06.761 [58/264] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:34:06.761 [59/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:34:07.020 [60/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:34:07.020 [61/264] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:34:07.020 [62/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:34:07.020 [63/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:34:07.020 [64/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:34:07.020 [65/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:34:07.020 [66/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:34:07.278 [67/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:34:07.278 [68/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:34:07.278 [69/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:34:07.278 [70/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:34:07.278 [71/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:34:07.278 [72/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:34:07.537 [73/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:34:07.537 [74/264] Compiling 
C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:34:07.537 [75/264] Linking target lib/librte_telemetry.so.24.0 00:34:07.537 [76/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:34:07.537 [77/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:34:07.537 [78/264] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:34:07.537 [79/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:34:07.796 [80/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:34:07.796 [81/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:34:07.796 [82/264] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:34:07.796 [83/264] Linking static target lib/librte_ring.a 00:34:07.796 [84/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:34:07.796 [85/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:34:07.796 [86/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:34:08.055 [87/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:34:08.055 [88/264] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:34:08.313 [89/264] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:34:08.313 [90/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:34:08.313 [91/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:34:08.313 [92/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:34:08.313 [93/264] Linking static target lib/librte_eal.a 00:34:08.313 [94/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:34:08.313 [95/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:34:08.313 [96/264] Linking static target lib/librte_mempool.a 00:34:08.313 [97/264] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:34:08.572 [98/264] Linking static target lib/net/libnet_crc_avx512_lib.a 00:34:08.572 [99/264] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:34:08.572 [100/264] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:34:08.572 [101/264] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:34:08.572 [102/264] Linking static target lib/librte_rcu.a 00:34:08.572 [103/264] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:34:08.830 [104/264] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:34:08.830 [105/264] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:34:08.830 [106/264] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:34:08.830 [107/264] Linking static target lib/librte_net.a 00:34:08.830 [108/264] Linking static target lib/librte_meter.a 00:34:09.089 [109/264] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:34:09.089 [110/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:34:09.089 [111/264] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:34:09.089 [112/264] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:34:09.089 [113/264] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:34:09.089 [114/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:34:09.347 [115/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 
00:34:09.347 [116/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:34:09.605 [117/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:34:09.863 [118/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:34:09.863 [119/264] Linking static target lib/librte_mbuf.a 00:34:09.863 [120/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:34:09.863 [121/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:34:10.122 [122/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:34:10.122 [123/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:34:10.122 [124/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:34:10.122 [125/264] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:34:10.381 [126/264] Linking static target lib/librte_pci.a 00:34:10.381 [127/264] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:34:10.381 [128/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:34:10.381 [129/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:34:10.381 [130/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:34:10.381 [131/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:34:10.381 [132/264] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:34:10.640 [133/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:34:10.640 [134/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:34:10.640 [135/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:34:10.640 [136/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:34:10.640 [137/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:34:10.640 [138/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:34:10.640 [139/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:34:10.640 [140/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:34:10.640 [141/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:34:10.640 [142/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:34:10.640 [143/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:34:10.899 [144/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:34:10.899 [145/264] Linking static target lib/librte_cmdline.a 00:34:11.158 [146/264] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:34:11.158 [147/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:34:11.158 [148/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:34:11.417 [149/264] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:34:11.417 [150/264] Linking static target lib/librte_timer.a 00:34:11.417 [151/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:34:11.676 [152/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:34:11.676 [153/264] Linking static target lib/librte_compressdev.a 00:34:11.676 [154/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:34:11.676 [155/264] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:34:11.676 [156/264] Generating 
lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:34:11.676 [157/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:34:11.935 [158/264] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:34:11.935 [159/264] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:34:11.935 [160/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:34:11.935 [161/264] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:34:12.194 [162/264] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:34:12.194 [163/264] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:34:12.453 [164/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:34:12.453 [165/264] Linking static target lib/librte_dmadev.a 00:34:12.453 [166/264] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:34:12.453 [167/264] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:34:12.453 [168/264] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:34:12.712 [169/264] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:34:12.712 [170/264] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:34:12.712 [171/264] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:34:12.712 [172/264] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:34:12.971 [173/264] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:34:12.971 [174/264] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:34:13.229 [175/264] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:34:13.229 [176/264] Linking static target lib/librte_power.a 00:34:13.229 [177/264] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:34:13.229 [178/264] Linking static target lib/librte_reorder.a 00:34:13.488 [179/264] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:34:13.488 [180/264] Linking static target lib/librte_security.a 00:34:13.488 [181/264] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:34:13.488 [182/264] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:34:13.746 [183/264] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:34:13.746 [184/264] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:34:13.746 [185/264] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:34:13.746 [186/264] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:34:14.003 [187/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:34:14.003 [188/264] Linking static target lib/librte_cryptodev.a 00:34:14.261 [189/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:34:14.261 [190/264] Linking static target lib/librte_ethdev.a 00:34:14.519 [191/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:34:14.519 [192/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:34:14.519 [193/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:34:14.777 [194/264] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:34:14.777 [195/264] Compiling C object 
lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:34:14.777 [196/264] Linking static target lib/librte_hash.a 00:34:14.777 [197/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:34:15.035 [198/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:34:15.294 [199/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:34:15.294 [200/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:34:15.553 [201/264] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:34:15.553 [202/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:34:15.553 [203/264] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:34:15.811 [204/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:34:15.811 [205/264] Linking static target drivers/libtmp_rte_bus_vdev.a 00:34:15.811 [206/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:34:15.811 [207/264] Linking static target drivers/libtmp_rte_bus_pci.a 00:34:16.070 [208/264] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:34:16.070 [209/264] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:34:16.070 [210/264] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:34:16.070 [211/264] Linking static target drivers/librte_bus_vdev.a 00:34:16.070 [212/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:34:16.070 [213/264] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:34:16.070 [214/264] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:34:16.070 [215/264] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:34:16.070 [216/264] Linking static target drivers/librte_bus_pci.a 00:34:16.328 [217/264] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:34:16.328 [218/264] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:34:16.328 [219/264] Linking static target drivers/libtmp_rte_mempool_ring.a 00:34:16.587 [220/264] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:34:16.587 [221/264] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:34:16.587 [222/264] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:34:16.587 [223/264] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:34:16.587 [224/264] Linking static target drivers/librte_mempool_ring.a 00:34:20.770 [225/264] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:34:24.958 [226/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:34:25.217 [227/264] Linking target lib/librte_eal.so.24.0 00:34:25.476 [228/264] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:34:25.734 [229/264] Linking target lib/librte_meter.so.24.0 00:34:25.734 [230/264] Linking target lib/librte_pci.so.24.0 00:34:25.734 [231/264] Linking target lib/librte_ring.so.24.0 00:34:25.734 [232/264] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:34:25.734 [233/264] Generating symbol file 
lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:34:25.734 [234/264] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:34:25.993 [235/264] Linking target drivers/librte_bus_vdev.so.24.0 00:34:25.993 [236/264] Linking target lib/librte_timer.so.24.0 00:34:25.993 [237/264] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:34:26.251 [238/264] Linking target lib/librte_dmadev.so.24.0 00:34:26.251 [239/264] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:34:26.819 [240/264] Linking target lib/librte_mempool.so.24.0 00:34:26.819 [241/264] Linking target lib/librte_rcu.so.24.0 00:34:26.819 [242/264] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:34:26.819 [243/264] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:34:27.077 [244/264] Linking target drivers/librte_bus_pci.so.24.0 00:34:27.336 [245/264] Linking target drivers/librte_mempool_ring.so.24.0 00:34:28.712 [246/264] Linking target lib/librte_mbuf.so.24.0 00:34:28.713 [247/264] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:34:29.288 [248/264] Linking target lib/librte_reorder.so.24.0 00:34:29.288 [249/264] Linking target lib/librte_compressdev.so.24.0 00:34:29.853 [250/264] Linking target lib/librte_net.so.24.0 00:34:29.853 [251/264] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:34:31.228 [252/264] Linking target lib/librte_cryptodev.so.24.0 00:34:31.228 [253/264] Linking target lib/librte_cmdline.so.24.0 00:34:31.228 [254/264] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:34:31.486 [255/264] Linking target lib/librte_security.so.24.0 00:34:34.016 [256/264] Linking target lib/librte_hash.so.24.0 00:34:34.017 [257/264] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:34:42.194 [258/264] Linking target lib/librte_ethdev.so.24.0 00:34:42.194 [259/264] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:34:43.129 [260/264] Linking target lib/librte_power.so.24.0 00:34:46.414 [261/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:34:46.414 [262/264] Linking static target lib/librte_vhost.a 00:34:47.787 [263/264] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:35:34.451 [264/264] Linking target lib/librte_vhost.so.24.0 00:35:34.451 INFO: autodetecting backend as ninja 00:35:34.451 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:35:34.451 CC lib/log/log.o 00:35:34.451 CC lib/log/log_deprecated.o 00:35:34.451 CC lib/log/log_flags.o 00:35:34.451 CC lib/ut_mock/mock.o 00:35:34.451 CC lib/ut/ut.o 00:35:34.451 LIB libspdk_ut_mock.a 00:35:34.451 LIB libspdk_log.a 00:35:34.451 LIB libspdk_ut.a 00:35:34.709 CXX lib/trace_parser/trace.o 00:35:34.709 CC lib/util/bit_array.o 00:35:34.709 CC lib/ioat/ioat.o 00:35:34.709 CC lib/util/base64.o 00:35:34.709 CC lib/util/cpuset.o 00:35:34.709 CC lib/util/crc16.o 00:35:34.709 CC lib/util/crc32.o 00:35:34.709 CC lib/util/crc32c.o 00:35:34.709 CC lib/dma/dma.o 00:35:34.709 CC lib/vfio_user/host/vfio_user_pci.o 00:35:34.709 CC lib/vfio_user/host/vfio_user.o 00:35:34.709 CC lib/util/crc32_ieee.o 00:35:34.709 CC lib/util/crc64.o 00:35:34.968 CC lib/util/dif.o 00:35:34.968 LIB libspdk_dma.a 00:35:34.968 CC lib/util/fd.o 00:35:34.968 CC lib/util/file.o 
00:35:34.968 CC lib/util/hexlify.o 00:35:34.968 CC lib/util/iov.o 00:35:34.968 LIB libspdk_ioat.a 00:35:34.968 CC lib/util/math.o 00:35:34.968 LIB libspdk_vfio_user.a 00:35:34.968 CC lib/util/pipe.o 00:35:34.968 CC lib/util/strerror_tls.o 00:35:34.968 CC lib/util/string.o 00:35:34.968 CC lib/util/uuid.o 00:35:34.968 CC lib/util/fd_group.o 00:35:34.968 CC lib/util/xor.o 00:35:34.968 CC lib/util/zipf.o 00:35:35.536 LIB libspdk_util.a 00:35:35.536 LIB libspdk_trace_parser.a 00:35:35.536 CC lib/env_dpdk/env.o 00:35:35.536 CC lib/vmd/vmd.o 00:35:35.536 CC lib/vmd/led.o 00:35:35.536 CC lib/conf/conf.o 00:35:35.536 CC lib/env_dpdk/memory.o 00:35:35.536 CC lib/env_dpdk/pci.o 00:35:35.536 CC lib/env_dpdk/init.o 00:35:35.536 CC lib/idxd/idxd.o 00:35:35.536 CC lib/json/json_parse.o 00:35:35.536 CC lib/rdma/common.o 00:35:35.536 CC lib/rdma/rdma_verbs.o 00:35:35.794 CC lib/json/json_util.o 00:35:35.794 LIB libspdk_conf.a 00:35:35.794 CC lib/json/json_write.o 00:35:35.794 CC lib/idxd/idxd_user.o 00:35:35.794 CC lib/env_dpdk/threads.o 00:35:35.794 CC lib/env_dpdk/pci_ioat.o 00:35:35.794 LIB libspdk_rdma.a 00:35:35.794 CC lib/env_dpdk/pci_virtio.o 00:35:35.794 CC lib/env_dpdk/pci_vmd.o 00:35:35.794 CC lib/env_dpdk/pci_idxd.o 00:35:35.794 CC lib/env_dpdk/pci_event.o 00:35:35.794 CC lib/env_dpdk/sigbus_handler.o 00:35:36.053 CC lib/env_dpdk/pci_dpdk.o 00:35:36.053 LIB libspdk_json.a 00:35:36.053 LIB libspdk_idxd.a 00:35:36.053 LIB libspdk_vmd.a 00:35:36.053 CC lib/env_dpdk/pci_dpdk_2207.o 00:35:36.053 CC lib/env_dpdk/pci_dpdk_2211.o 00:35:36.053 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:35:36.053 CC lib/jsonrpc/jsonrpc_server.o 00:35:36.053 CC lib/jsonrpc/jsonrpc_client.o 00:35:36.053 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:35:36.311 LIB libspdk_jsonrpc.a 00:35:36.311 CC lib/rpc/rpc.o 00:35:36.311 LIB libspdk_env_dpdk.a 00:35:36.570 LIB libspdk_rpc.a 00:35:36.570 CC lib/notify/notify_rpc.o 00:35:36.570 CC lib/notify/notify.o 00:35:36.570 CC lib/trace/trace.o 00:35:36.570 CC lib/sock/sock.o 00:35:36.570 CC lib/trace/trace_rpc.o 00:35:36.570 CC lib/trace/trace_flags.o 00:35:36.570 CC lib/sock/sock_rpc.o 00:35:36.829 LIB libspdk_notify.a 00:35:36.829 LIB libspdk_trace.a 00:35:36.829 LIB libspdk_sock.a 00:35:36.829 CC lib/thread/thread.o 00:35:36.829 CC lib/thread/iobuf.o 00:35:36.829 CC lib/nvme/nvme_ctrlr_cmd.o 00:35:36.829 CC lib/nvme/nvme_fabric.o 00:35:36.829 CC lib/nvme/nvme_ctrlr.o 00:35:36.829 CC lib/nvme/nvme_pcie_common.o 00:35:36.829 CC lib/nvme/nvme_ns_cmd.o 00:35:36.829 CC lib/nvme/nvme_ns.o 00:35:36.829 CC lib/nvme/nvme_pcie.o 00:35:36.829 CC lib/nvme/nvme_qpair.o 00:35:37.089 CC lib/nvme/nvme.o 00:35:37.348 CC lib/nvme/nvme_quirks.o 00:35:37.348 CC lib/nvme/nvme_transport.o 00:35:37.348 CC lib/nvme/nvme_discovery.o 00:35:37.606 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:35:37.607 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:35:37.607 CC lib/nvme/nvme_tcp.o 00:35:37.607 CC lib/nvme/nvme_opal.o 00:35:37.607 CC lib/nvme/nvme_io_msg.o 00:35:37.607 LIB libspdk_thread.a 00:35:37.607 CC lib/nvme/nvme_poll_group.o 00:35:37.865 CC lib/nvme/nvme_zns.o 00:35:37.866 CC lib/nvme/nvme_cuse.o 00:35:37.866 CC lib/accel/accel.o 00:35:37.866 CC lib/accel/accel_rpc.o 00:35:37.866 CC lib/accel/accel_sw.o 00:35:38.124 CC lib/blob/blobstore.o 00:35:38.124 CC lib/nvme/nvme_vfio_user.o 00:35:38.124 CC lib/nvme/nvme_rdma.o 00:35:38.124 CC lib/blob/request.o 00:35:38.124 CC lib/blob/zeroes.o 00:35:38.124 CC lib/init/json_config.o 00:35:38.124 CC lib/virtio/virtio.o 00:35:38.383 CC lib/blob/blob_bs_dev.o 00:35:38.383 CC lib/init/subsystem.o 
00:35:38.383 CC lib/init/subsystem_rpc.o 00:35:38.383 CC lib/virtio/virtio_vhost_user.o 00:35:38.383 CC lib/virtio/virtio_vfio_user.o 00:35:38.383 CC lib/virtio/virtio_pci.o 00:35:38.383 CC lib/init/rpc.o 00:35:38.641 LIB libspdk_accel.a 00:35:38.641 LIB libspdk_init.a 00:35:38.641 LIB libspdk_virtio.a 00:35:38.641 CC lib/bdev/bdev.o 00:35:38.641 CC lib/bdev/bdev_rpc.o 00:35:38.641 CC lib/bdev/scsi_nvme.o 00:35:38.641 CC lib/bdev/bdev_zone.o 00:35:38.641 CC lib/bdev/part.o 00:35:38.641 CC lib/event/app.o 00:35:38.642 CC lib/event/reactor.o 00:35:38.642 CC lib/event/log_rpc.o 00:35:38.642 CC lib/event/app_rpc.o 00:35:38.900 CC lib/event/scheduler_static.o 00:35:38.900 LIB libspdk_nvme.a 00:35:38.900 LIB libspdk_event.a 00:35:39.467 LIB libspdk_blob.a 00:35:39.467 CC lib/blobfs/tree.o 00:35:39.467 CC lib/blobfs/blobfs.o 00:35:39.467 CC lib/lvol/lvol.o 00:35:40.032 LIB libspdk_bdev.a 00:35:40.032 LIB libspdk_blobfs.a 00:35:40.032 LIB libspdk_lvol.a 00:35:40.032 CC lib/nvmf/ctrlr.o 00:35:40.032 CC lib/nvmf/ctrlr_bdev.o 00:35:40.032 CC lib/nvmf/ctrlr_discovery.o 00:35:40.032 CC lib/nvmf/subsystem.o 00:35:40.032 CC lib/scsi/dev.o 00:35:40.032 CC lib/nvmf/nvmf.o 00:35:40.032 CC lib/nvmf/nvmf_rpc.o 00:35:40.032 CC lib/scsi/lun.o 00:35:40.032 CC lib/nbd/nbd.o 00:35:40.032 CC lib/ftl/ftl_core.o 00:35:40.290 CC lib/scsi/port.o 00:35:40.290 CC lib/nvmf/transport.o 00:35:40.290 CC lib/scsi/scsi.o 00:35:40.290 CC lib/scsi/scsi_bdev.o 00:35:40.290 CC lib/nbd/nbd_rpc.o 00:35:40.290 CC lib/nvmf/tcp.o 00:35:40.291 CC lib/ftl/ftl_init.o 00:35:40.291 CC lib/scsi/scsi_pr.o 00:35:40.291 CC lib/scsi/scsi_rpc.o 00:35:40.550 CC lib/scsi/task.o 00:35:40.550 LIB libspdk_nbd.a 00:35:40.550 CC lib/ftl/ftl_layout.o 00:35:40.550 CC lib/nvmf/rdma.o 00:35:40.550 CC lib/ftl/ftl_debug.o 00:35:40.550 CC lib/ftl/ftl_io.o 00:35:40.550 CC lib/ftl/ftl_sb.o 00:35:40.550 CC lib/ftl/ftl_l2p.o 00:35:40.550 LIB libspdk_scsi.a 00:35:40.550 CC lib/ftl/ftl_l2p_flat.o 00:35:40.808 CC lib/ftl/ftl_nv_cache.o 00:35:40.808 CC lib/ftl/ftl_band.o 00:35:40.808 CC lib/ftl/ftl_band_ops.o 00:35:40.808 CC lib/ftl/ftl_writer.o 00:35:40.808 CC lib/ftl/ftl_rq.o 00:35:40.808 CC lib/ftl/ftl_reloc.o 00:35:40.808 CC lib/iscsi/conn.o 00:35:40.808 CC lib/iscsi/init_grp.o 00:35:40.808 CC lib/ftl/ftl_l2p_cache.o 00:35:40.808 CC lib/iscsi/iscsi.o 00:35:41.065 CC lib/iscsi/md5.o 00:35:41.065 CC lib/ftl/ftl_p2l.o 00:35:41.065 CC lib/ftl/mngt/ftl_mngt.o 00:35:41.065 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:35:41.065 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:35:41.065 CC lib/ftl/mngt/ftl_mngt_startup.o 00:35:41.065 CC lib/vhost/vhost.o 00:35:41.323 CC lib/ftl/mngt/ftl_mngt_md.o 00:35:41.323 CC lib/vhost/vhost_rpc.o 00:35:41.323 CC lib/vhost/vhost_scsi.o 00:35:41.323 CC lib/vhost/vhost_blk.o 00:35:41.323 CC lib/ftl/mngt/ftl_mngt_misc.o 00:35:41.323 CC lib/vhost/rte_vhost_user.o 00:35:41.323 CC lib/iscsi/param.o 00:35:41.323 LIB libspdk_nvmf.a 00:35:41.323 CC lib/iscsi/portal_grp.o 00:35:41.581 CC lib/iscsi/tgt_node.o 00:35:41.581 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:35:41.581 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:35:41.581 CC lib/ftl/mngt/ftl_mngt_band.o 00:35:41.581 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:35:41.581 CC lib/iscsi/iscsi_subsystem.o 00:35:41.581 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:35:41.581 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:35:41.838 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:35:41.838 CC lib/iscsi/iscsi_rpc.o 00:35:41.838 CC lib/iscsi/task.o 00:35:41.838 CC lib/ftl/utils/ftl_conf.o 00:35:41.838 CC lib/ftl/utils/ftl_md.o 00:35:41.838 CC lib/ftl/utils/ftl_mempool.o 
00:35:41.838 CC lib/ftl/utils/ftl_bitmap.o 00:35:41.838 CC lib/ftl/utils/ftl_property.o 00:35:41.838 LIB libspdk_vhost.a 00:35:41.838 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:35:41.838 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:35:41.838 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:35:41.838 LIB libspdk_iscsi.a 00:35:42.095 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:35:42.095 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:35:42.095 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:35:42.095 CC lib/ftl/upgrade/ftl_sb_v3.o 00:35:42.095 CC lib/ftl/upgrade/ftl_sb_v5.o 00:35:42.095 CC lib/ftl/nvc/ftl_nvc_dev.o 00:35:42.095 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:35:42.095 CC lib/ftl/base/ftl_base_dev.o 00:35:42.095 CC lib/ftl/base/ftl_base_bdev.o 00:35:42.406 LIB libspdk_ftl.a 00:35:42.406 CC module/env_dpdk/env_dpdk_rpc.o 00:35:42.406 CC module/blob/bdev/blob_bdev.o 00:35:42.406 CC module/scheduler/dynamic/scheduler_dynamic.o 00:35:42.406 CC module/accel/error/accel_error.o 00:35:42.406 CC module/accel/ioat/accel_ioat.o 00:35:42.406 CC module/sock/posix/posix.o 00:35:42.406 CC module/scheduler/gscheduler/gscheduler.o 00:35:42.406 CC module/accel/iaa/accel_iaa.o 00:35:42.406 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:35:42.406 CC module/accel/dsa/accel_dsa.o 00:35:42.665 LIB libspdk_env_dpdk_rpc.a 00:35:42.665 CC module/accel/dsa/accel_dsa_rpc.o 00:35:42.665 LIB libspdk_scheduler_gscheduler.a 00:35:42.665 LIB libspdk_scheduler_dpdk_governor.a 00:35:42.665 CC module/accel/error/accel_error_rpc.o 00:35:42.665 CC module/accel/ioat/accel_ioat_rpc.o 00:35:42.665 LIB libspdk_scheduler_dynamic.a 00:35:42.665 CC module/accel/iaa/accel_iaa_rpc.o 00:35:42.665 LIB libspdk_blob_bdev.a 00:35:42.665 LIB libspdk_accel_dsa.a 00:35:42.665 LIB libspdk_accel_ioat.a 00:35:42.665 LIB libspdk_accel_error.a 00:35:42.665 LIB libspdk_accel_iaa.a 00:35:42.922 CC module/blobfs/bdev/blobfs_bdev.o 00:35:42.922 CC module/bdev/delay/vbdev_delay.o 00:35:42.922 CC module/bdev/malloc/bdev_malloc.o 00:35:42.922 CC module/bdev/error/vbdev_error.o 00:35:42.922 CC module/bdev/gpt/gpt.o 00:35:42.922 CC module/bdev/lvol/vbdev_lvol.o 00:35:42.922 CC module/bdev/null/bdev_null.o 00:35:42.922 CC module/bdev/passthru/vbdev_passthru.o 00:35:42.922 CC module/bdev/nvme/bdev_nvme.o 00:35:42.922 LIB libspdk_sock_posix.a 00:35:42.922 CC module/bdev/nvme/bdev_nvme_rpc.o 00:35:42.922 CC module/bdev/gpt/vbdev_gpt.o 00:35:42.922 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:35:42.922 CC module/bdev/error/vbdev_error_rpc.o 00:35:42.922 CC module/bdev/null/bdev_null_rpc.o 00:35:42.922 CC module/bdev/malloc/bdev_malloc_rpc.o 00:35:42.922 CC module/bdev/delay/vbdev_delay_rpc.o 00:35:43.179 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:35:43.179 LIB libspdk_blobfs_bdev.a 00:35:43.179 LIB libspdk_bdev_gpt.a 00:35:43.179 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:35:43.179 LIB libspdk_bdev_error.a 00:35:43.179 LIB libspdk_bdev_malloc.a 00:35:43.179 LIB libspdk_bdev_null.a 00:35:43.179 LIB libspdk_bdev_delay.a 00:35:43.179 CC module/bdev/nvme/nvme_rpc.o 00:35:43.179 LIB libspdk_bdev_passthru.a 00:35:43.179 CC module/bdev/nvme/bdev_mdns_client.o 00:35:43.179 CC module/bdev/raid/bdev_raid.o 00:35:43.179 CC module/bdev/split/vbdev_split.o 00:35:43.179 CC module/bdev/zone_block/vbdev_zone_block.o 00:35:43.179 CC module/bdev/aio/bdev_aio.o 00:35:43.436 CC module/bdev/aio/bdev_aio_rpc.o 00:35:43.436 CC module/bdev/ftl/bdev_ftl.o 00:35:43.436 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:35:43.436 LIB libspdk_bdev_lvol.a 00:35:43.436 CC module/bdev/nvme/vbdev_opal.o 
00:35:43.436 CC module/bdev/split/vbdev_split_rpc.o 00:35:43.436 CC module/bdev/nvme/vbdev_opal_rpc.o 00:35:43.436 CC module/bdev/iscsi/bdev_iscsi.o 00:35:43.436 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:35:43.436 LIB libspdk_bdev_zone_block.a 00:35:43.436 LIB libspdk_bdev_aio.a 00:35:43.436 CC module/bdev/ftl/bdev_ftl_rpc.o 00:35:43.436 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:35:43.436 LIB libspdk_bdev_split.a 00:35:43.694 CC module/bdev/raid/bdev_raid_rpc.o 00:35:43.694 CC module/bdev/raid/bdev_raid_sb.o 00:35:43.694 CC module/bdev/raid/raid0.o 00:35:43.694 CC module/bdev/raid/raid1.o 00:35:43.694 CC module/bdev/virtio/bdev_virtio_scsi.o 00:35:43.694 CC module/bdev/virtio/bdev_virtio_blk.o 00:35:43.694 LIB libspdk_bdev_ftl.a 00:35:43.694 LIB libspdk_bdev_iscsi.a 00:35:43.694 CC module/bdev/raid/concat.o 00:35:43.694 CC module/bdev/raid/raid5f.o 00:35:43.694 CC module/bdev/virtio/bdev_virtio_rpc.o 00:35:43.952 LIB libspdk_bdev_virtio.a 00:35:43.952 LIB libspdk_bdev_raid.a 00:35:43.952 LIB libspdk_bdev_nvme.a 00:35:44.211 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:35:44.211 CC module/event/subsystems/scheduler/scheduler.o 00:35:44.211 CC module/event/subsystems/iobuf/iobuf.o 00:35:44.211 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:35:44.211 CC module/event/subsystems/vmd/vmd.o 00:35:44.211 CC module/event/subsystems/vmd/vmd_rpc.o 00:35:44.211 CC module/event/subsystems/sock/sock.o 00:35:44.470 LIB libspdk_event_vhost_blk.a 00:35:44.470 LIB libspdk_event_scheduler.a 00:35:44.470 LIB libspdk_event_sock.a 00:35:44.470 LIB libspdk_event_vmd.a 00:35:44.470 LIB libspdk_event_iobuf.a 00:35:44.470 CC module/event/subsystems/accel/accel.o 00:35:44.728 LIB libspdk_event_accel.a 00:35:44.728 CC module/event/subsystems/bdev/bdev.o 00:35:44.986 LIB libspdk_event_bdev.a 00:35:44.986 CC module/event/subsystems/nbd/nbd.o 00:35:44.986 CC module/event/subsystems/scsi/scsi.o 00:35:44.986 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:35:44.986 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:35:45.244 LIB libspdk_event_nbd.a 00:35:45.244 LIB libspdk_event_scsi.a 00:35:45.244 LIB libspdk_event_nvmf.a 00:35:45.244 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:35:45.244 CC module/event/subsystems/iscsi/iscsi.o 00:35:45.502 LIB libspdk_event_vhost_scsi.a 00:35:45.502 LIB libspdk_event_iscsi.a 00:35:45.502 CXX app/trace/trace.o 00:35:45.502 CC app/trace_record/trace_record.o 00:35:45.502 CC app/spdk_lspci/spdk_lspci.o 00:35:45.761 CC app/iscsi_tgt/iscsi_tgt.o 00:35:45.761 CC app/nvmf_tgt/nvmf_main.o 00:35:45.761 CC examples/accel/perf/accel_perf.o 00:35:45.761 CC app/spdk_tgt/spdk_tgt.o 00:35:45.761 CC test/accel/dif/dif.o 00:35:45.761 CC test/app/bdev_svc/bdev_svc.o 00:35:45.761 LINK spdk_lspci 00:35:45.761 CC test/bdev/bdevio/bdevio.o 00:35:45.761 LINK nvmf_tgt 00:35:45.761 LINK spdk_trace_record 00:35:46.019 LINK iscsi_tgt 00:35:46.019 LINK spdk_tgt 00:35:46.019 LINK bdev_svc 00:35:46.019 LINK accel_perf 00:35:46.019 LINK dif 00:35:46.019 LINK bdevio 00:35:46.019 LINK spdk_trace 00:35:52.617 CC app/spdk_nvme_perf/perf.o 00:35:55.911 LINK spdk_nvme_perf 00:36:01.177 CC app/spdk_nvme_identify/identify.o 00:36:05.365 LINK spdk_nvme_identify 00:36:31.914 CC app/spdk_nvme_discover/discovery_aer.o 00:36:32.852 LINK spdk_nvme_discover 00:36:35.384 CC examples/bdev/hello_world/hello_bdev.o 00:36:36.762 LINK hello_bdev 00:37:23.444 CC examples/bdev/bdevperf/bdevperf.o 00:37:23.444 LINK bdevperf 00:37:24.382 CC app/spdk_top/spdk_top.o 00:37:29.653 LINK spdk_top 00:37:39.628 CC app/vhost/vhost.o 
00:37:40.603 LINK vhost 00:38:07.149 CC examples/blob/hello_world/hello_blob.o 00:38:07.149 LINK hello_blob 00:38:29.069 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:38:29.639 LINK nvme_fuzz 00:38:31.540 CC test/blobfs/mkfs/mkfs.o 00:38:32.106 CC app/spdk_dd/spdk_dd.o 00:38:32.107 LINK mkfs 00:38:34.642 LINK spdk_dd 00:38:49.513 TEST_HEADER include/spdk/config.h 00:38:49.513 CXX test/cpp_headers/accel_module.o 00:38:49.513 CXX test/cpp_headers/bit_pool.o 00:38:49.771 CXX test/cpp_headers/ioat.o 00:38:51.146 CXX test/cpp_headers/blobfs.o 00:38:52.523 CXX test/cpp_headers/notify.o 00:38:53.460 CXX test/cpp_headers/pipe.o 00:38:54.839 CXX test/cpp_headers/accel.o 00:38:56.745 CXX test/cpp_headers/file.o 00:38:57.681 CXX test/cpp_headers/version.o 00:38:58.248 CXX test/cpp_headers/trace_parser.o 00:38:59.623 CXX test/cpp_headers/opal_spec.o 00:39:00.999 CXX test/cpp_headers/uuid.o 00:39:02.376 CXX test/cpp_headers/likely.o 00:39:03.752 CXX test/cpp_headers/dif.o 00:39:05.146 CXX test/cpp_headers/memory.o 00:39:05.714 CC test/dma/test_dma/test_dma.o 00:39:06.280 CXX test/cpp_headers/vfio_user_pci.o 00:39:07.657 CXX test/cpp_headers/dma.o 00:39:07.916 LINK test_dma 00:39:08.853 CXX test/cpp_headers/nbd.o 00:39:09.112 CXX test/cpp_headers/conf.o 00:39:10.045 CXX test/cpp_headers/env_dpdk.o 00:39:11.420 CXX test/cpp_headers/nvmf_spec.o 00:39:12.795 CXX test/cpp_headers/iscsi_spec.o 00:39:14.172 CXX test/cpp_headers/mmio.o 00:39:15.548 CXX test/cpp_headers/json.o 00:39:16.483 CXX test/cpp_headers/opal.o 00:39:17.857 CXX test/cpp_headers/bdev.o 00:39:19.232 CXX test/cpp_headers/base64.o 00:39:20.606 CXX test/cpp_headers/blobfs_bdev.o 00:39:21.540 CXX test/cpp_headers/nvme_ocssd.o 00:39:22.916 CXX test/cpp_headers/fd.o 00:39:24.289 CXX test/cpp_headers/barrier.o 00:39:24.856 CXX test/cpp_headers/scsi_spec.o 00:39:25.787 CXX test/cpp_headers/zipf.o 00:39:26.353 CC test/env/mem_callbacks/mem_callbacks.o 00:39:26.919 CXX test/cpp_headers/nvmf.o 00:39:28.295 CXX test/cpp_headers/queue.o 00:39:28.295 CXX test/cpp_headers/xor.o 00:39:28.863 LINK mem_callbacks 00:39:29.428 CXX test/cpp_headers/cpuset.o 00:39:29.428 CXX test/cpp_headers/thread.o 00:39:30.803 CXX test/cpp_headers/bdev_zone.o 00:39:30.803 CC test/env/vtophys/vtophys.o 00:39:31.740 CXX test/cpp_headers/fd_group.o 00:39:31.999 LINK vtophys 00:39:32.932 CXX test/cpp_headers/tree.o 00:39:33.193 CXX test/cpp_headers/blob_bdev.o 00:39:35.123 CXX test/cpp_headers/crc64.o 00:39:35.688 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:39:35.947 CXX test/cpp_headers/assert.o 00:39:37.321 CXX test/cpp_headers/nvme_spec.o 00:39:38.696 CXX test/cpp_headers/endian.o 00:39:40.599 CXX test/cpp_headers/pci_ids.o 00:39:41.536 CXX test/cpp_headers/log.o 00:39:42.914 CXX test/cpp_headers/nvme_ocssd_spec.o 00:39:44.820 CXX test/cpp_headers/ftl.o 00:39:44.820 LINK iscsi_fuzz 00:39:46.198 CXX test/cpp_headers/config.o 00:39:46.198 CXX test/cpp_headers/vhost.o 00:39:47.575 CXX test/cpp_headers/bdev_module.o 00:39:49.479 CXX test/cpp_headers/nvme_intel.o 00:39:50.411 CXX test/cpp_headers/idxd_spec.o 00:39:51.785 CXX test/cpp_headers/crc16.o 00:39:53.167 CXX test/cpp_headers/nvme.o 00:39:54.575 CXX test/cpp_headers/stdinc.o 00:39:55.952 CXX test/cpp_headers/scsi.o 00:39:56.889 CXX test/cpp_headers/nvmf_fc_spec.o 00:39:58.265 CXX test/cpp_headers/idxd.o 00:39:59.639 CXX test/cpp_headers/hexlify.o 00:40:00.573 CXX test/cpp_headers/reduce.o 00:40:01.950 CXX test/cpp_headers/crc32.o 00:40:02.885 CXX test/cpp_headers/init.o 00:40:03.819 CXX test/cpp_headers/nvmf_transport.o 
00:40:05.724 CXX test/cpp_headers/nvme_zns.o 00:40:06.657 CXX test/cpp_headers/vfio_user_spec.o 00:40:07.250 CXX test/cpp_headers/util.o 00:40:08.184 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:40:08.184 CXX test/cpp_headers/jsonrpc.o 00:40:08.184 CC test/event/event_perf/event_perf.o 00:40:09.119 LINK env_dpdk_post_init 00:40:09.119 CXX test/cpp_headers/env.o 00:40:09.119 LINK event_perf 00:40:10.051 CC test/event/reactor/reactor.o 00:40:10.051 CXX test/cpp_headers/nvmf_cmd.o 00:40:10.658 LINK reactor 00:40:11.592 CXX test/cpp_headers/lvol.o 00:40:12.966 CXX test/cpp_headers/histogram_data.o 00:40:13.899 CXX test/cpp_headers/event.o 00:40:14.833 CXX test/cpp_headers/trace.o 00:40:16.206 CXX test/cpp_headers/ioat_spec.o 00:40:17.580 CXX test/cpp_headers/string.o 00:40:18.955 CXX test/cpp_headers/ublk.o 00:40:19.521 CXX test/cpp_headers/bit_array.o 00:40:21.420 CXX test/cpp_headers/scheduler.o 00:40:22.796 CXX test/cpp_headers/blob.o 00:40:24.172 CXX test/cpp_headers/gpt_spec.o 00:40:25.547 CXX test/cpp_headers/sock.o 00:40:26.923 CXX test/cpp_headers/vmd.o 00:40:28.298 CXX test/cpp_headers/rpc.o 00:40:30.199 CC test/event/reactor_perf/reactor_perf.o 00:40:30.777 LINK reactor_perf 00:40:31.033 CC test/event/app_repeat/app_repeat.o 00:40:32.410 LINK app_repeat 00:40:47.290 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:40:47.547 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:40:50.075 LINK vhost_fuzz 00:40:50.333 CC examples/blob/cli/blobcli.o 00:40:52.864 LINK blobcli 00:40:57.051 CC examples/ioat/perf/perf.o 00:40:57.618 LINK ioat_perf 00:40:57.877 CC examples/ioat/verify/verify.o 00:40:59.254 LINK verify 00:41:07.396 CC test/env/memory/memory_ut.o 00:41:09.919 CC app/fio/nvme/fio_plugin.o 00:41:11.292 CC app/fio/bdev/fio_plugin.o 00:41:11.292 LINK memory_ut 00:41:11.859 CC test/app/histogram_perf/histogram_perf.o 00:41:12.488 LINK spdk_nvme 00:41:12.488 LINK histogram_perf 00:41:13.422 LINK spdk_bdev 00:41:23.394 CC test/env/pci/pci_ut.o 00:41:23.958 LINK pci_ut 00:41:25.926 CC test/app/jsoncat/jsoncat.o 00:41:26.490 LINK jsoncat 00:41:28.391 CC test/event/scheduler/scheduler.o 00:41:29.324 LINK scheduler 00:41:31.219 CC test/lvol/esnap/esnap.o 00:41:33.128 CC test/nvme/aer/aer.o 00:41:34.084 LINK aer 00:41:38.262 CC test/nvme/reset/reset.o 00:41:38.872 CC test/nvme/sgl/sgl.o 00:41:39.436 LINK reset 00:41:40.002 LINK sgl 00:41:46.564 CC test/nvme/e2edp/nvme_dp.o 00:41:46.564 LINK esnap 00:41:47.128 LINK nvme_dp 00:41:49.651 CC test/app/stub/stub.o 00:41:51.025 LINK stub 00:42:47.240 CC test/nvme/overhead/overhead.o 00:42:47.240 CC test/rpc_client/rpc_client_test.o 00:42:47.240 LINK rpc_client_test 00:42:47.240 LINK overhead 00:42:55.349 CC test/nvme/err_injection/err_injection.o 00:42:55.349 CC examples/nvme/hello_world/hello_world.o 00:42:56.283 LINK err_injection 00:42:57.656 LINK hello_world 00:43:07.620 CC examples/nvme/reconnect/reconnect.o 00:43:10.147 LINK reconnect 00:43:32.216 CC examples/nvme/nvme_manage/nvme_manage.o 00:43:32.781 CC examples/nvme/arbitration/arbitration.o 00:43:34.682 LINK nvme_manage 00:43:36.586 LINK arbitration 00:43:58.505 CC examples/sock/hello_world/hello_sock.o 00:44:00.400 LINK hello_sock 00:44:18.517 CC examples/vmd/lsvmd/lsvmd.o 00:44:18.517 LINK lsvmd 00:44:18.517 CC test/nvme/startup/startup.o 00:44:19.452 LINK startup 00:44:21.352 CC test/nvme/reserve/reserve.o 00:44:22.725 LINK reserve 00:44:30.927 CC test/nvme/simple_copy/simple_copy.o 00:44:31.493 LINK simple_copy 00:44:49.600 CC test/nvme/connect_stress/connect_stress.o 00:44:49.600 CC 
examples/nvme/hotplug/hotplug.o 00:44:49.600 LINK connect_stress 00:44:49.600 CC test/thread/poller_perf/poller_perf.o 00:44:49.600 LINK hotplug 00:44:49.600 LINK poller_perf 00:44:52.129 CC test/thread/lock/spdk_lock.o 00:44:57.476 LINK spdk_lock 00:45:12.344 CC examples/vmd/led/led.o 00:45:12.344 LINK led 00:45:18.923 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:45:19.857 LINK histogram_ut 00:45:20.424 CC test/unit/lib/accel/accel.c/accel_ut.o 00:45:20.682 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:45:26.006 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:45:26.263 CC test/unit/lib/blob/blob.c/blob_ut.o 00:45:27.640 LINK blob_bdev_ut 00:45:28.207 LINK accel_ut 00:45:32.390 CC examples/nvmf/nvmf/nvmf.o 00:45:33.324 LINK nvmf 00:45:34.698 CC test/nvme/boot_partition/boot_partition.o 00:45:34.698 LINK bdev_ut 00:45:34.955 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:45:34.955 LINK boot_partition 00:45:35.523 LINK tree_ut 00:45:35.780 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:45:35.780 CC examples/nvme/cmb_copy/cmb_copy.o 00:45:36.714 LINK cmb_copy 00:45:38.087 LINK blobfs_async_ut 00:45:38.345 CC test/nvme/compliance/nvme_compliance.o 00:45:38.911 CC test/nvme/fused_ordering/fused_ordering.o 00:45:39.169 LINK nvme_compliance 00:45:39.737 LINK fused_ordering 00:45:41.111 LINK blob_ut 00:45:47.690 CC test/nvme/doorbell_aers/doorbell_aers.o 00:45:48.626 LINK doorbell_aers 00:45:55.204 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:46:00.542 LINK blobfs_sync_ut 00:46:10.595 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:46:10.595 LINK blobfs_bdev_ut 00:46:15.879 CC examples/util/zipf/zipf.o 00:46:16.815 LINK zipf 00:46:21.000 CC examples/thread/thread/thread_ex.o 00:46:22.377 LINK thread 00:46:24.898 CC examples/idxd/perf/perf.o 00:46:25.833 CC examples/interrupt_tgt/interrupt_tgt.o 00:46:26.090 CC test/unit/lib/bdev/part.c/part_ut.o 00:46:26.090 LINK idxd_perf 00:46:26.655 LINK interrupt_tgt 00:46:27.649 CC examples/nvme/abort/abort.o 00:46:28.230 LINK abort 00:46:33.495 LINK part_ut 00:46:34.060 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:46:35.450 LINK pmr_persistence 00:46:36.015 CC test/unit/lib/event/app.c/app_ut.o 00:46:36.274 CC test/unit/lib/dma/dma.c/dma_ut.o 00:46:37.207 CC test/nvme/fdp/fdp.o 00:46:38.581 LINK dma_ut 00:46:38.581 LINK app_ut 00:46:38.863 LINK fdp 00:46:44.133 CC test/nvme/cuse/cuse.o 00:46:46.026 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:46:47.395 LINK scsi_nvme_ut 00:46:48.768 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:46:49.027 LINK cuse 00:46:52.308 LINK reactor_ut 00:46:54.832 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:46:57.361 LINK gpt_ut 00:46:57.927 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:47:00.459 LINK ioat_ut 00:47:03.734 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:47:06.261 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:47:07.196 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:47:08.128 LINK init_grp_ut 00:47:08.128 LINK conn_ut 00:47:11.476 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:47:14.755 LINK json_parse_ut 00:47:15.690 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:47:16.624 LINK vbdev_lvol_ut 00:47:19.927 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:47:21.834 LINK jsonrpc_server_ut 00:47:23.737 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:47:27.947 LINK bdev_ut 00:47:29.843 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:47:29.843 CC test/unit/lib/log/log.c/log_ut.o 00:47:30.409 LINK iscsi_ut 00:47:31.011 LINK log_ut 
00:47:31.267 LINK json_util_ut 00:47:32.655 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:47:34.027 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:47:35.944 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:47:36.508 LINK json_write_ut 00:47:37.877 LINK bdev_raid_sb_ut 00:47:41.161 LINK bdev_raid_ut 00:47:43.686 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:47:43.944 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:47:43.944 CC test/unit/lib/iscsi/param.c/param_ut.o 00:47:45.316 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:47:45.316 LINK concat_ut 00:47:45.316 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:47:45.593 LINK param_ut 00:47:45.851 LINK raid1_ut 00:47:46.783 LINK portal_grp_ut 00:47:50.100 LINK lvol_ut 00:47:50.100 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:47:51.029 CC test/unit/lib/notify/notify.c/notify_ut.o 00:47:51.960 LINK notify_ut 00:47:52.217 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:47:52.217 LINK tgt_node_ut 00:47:53.599 CC test/unit/lib/bdev/raid/raid5f.c/raid5f_ut.o 00:47:53.599 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:47:54.173 LINK nvme_ut 00:47:54.173 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:47:54.741 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:47:54.999 LINK raid5f_ut 00:47:54.999 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:47:55.258 LINK dev_ut 00:47:55.516 LINK lun_ut 00:47:56.453 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:47:56.453 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:47:58.354 LINK nvme_ctrlr_cmd_ut 00:47:58.622 LINK tcp_ut 00:47:59.189 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:47:59.189 LINK nvme_ctrlr_ut 00:47:59.447 LINK ctrlr_ut 00:47:59.705 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:48:01.079 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:48:02.072 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:48:02.330 LINK bdev_zone_ut 00:48:02.588 LINK nvme_ctrlr_ocssd_cmd_ut 00:48:02.588 LINK scsi_ut 00:48:03.153 LINK subsystem_ut 00:48:04.522 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:48:05.894 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:48:05.894 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:48:06.152 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:48:06.410 LINK scsi_bdev_ut 00:48:06.668 LINK scsi_pr_ut 00:48:06.926 LINK vbdev_zone_block_ut 00:48:07.492 LINK nvme_ns_ut 00:48:07.492 CC test/unit/lib/sock/sock.c/sock_ut.o 00:48:07.492 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:48:07.492 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:48:08.426 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:48:08.683 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:48:08.941 LINK sock_ut 00:48:09.199 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:48:09.456 LINK ctrlr_discovery_ut 00:48:09.456 LINK ctrlr_bdev_ut 00:48:09.714 LINK nvme_ns_cmd_ut 00:48:09.972 LINK nvme_ns_ocssd_cmd_ut 00:48:10.229 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:48:11.161 LINK nvme_pcie_ut 00:48:11.161 LINK nvmf_ut 00:48:11.418 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:48:11.418 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:48:12.387 CC test/unit/lib/sock/posix.c/posix_ut.o 00:48:12.387 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:48:12.645 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:48:12.903 LINK posix_ut 00:48:13.161 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:48:13.161 LINK rdma_ut 00:48:13.419 LINK nvme_poll_group_ut 00:48:13.677 CC 
test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:48:14.242 LINK transport_ut 00:48:14.242 LINK nvme_qpair_ut 00:48:14.500 LINK bdev_nvme_ut 00:48:14.500 LINK nvme_quirks_ut 00:48:15.433 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:48:15.690 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:48:17.062 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:48:17.327 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:48:18.715 LINK nvme_transport_ut 00:48:19.290 LINK nvme_io_msg_ut 00:48:20.226 CC test/unit/lib/thread/thread.c/thread_ut.o 00:48:20.226 LINK nvme_tcp_ut 00:48:20.226 LINK nvme_pcie_common_ut 00:48:21.161 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:48:22.094 CC test/unit/lib/util/base64.c/base64_ut.o 00:48:22.351 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:48:22.351 LINK nvme_fabric_ut 00:48:22.351 LINK thread_ut 00:48:22.608 LINK base64_ut 00:48:22.865 LINK pci_event_ut 00:48:23.123 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:48:23.686 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:48:23.686 LINK subsystem_ut 00:48:23.944 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:48:23.944 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:48:24.270 CC test/unit/lib/vhost/vhost.c/vhost_ut.o 00:48:24.270 LINK rpc_ut 00:48:24.270 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:48:24.270 LINK idxd_user_ut 00:48:24.270 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:48:24.270 LINK bit_array_ut 00:48:24.532 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:48:24.532 LINK iobuf_ut 00:48:24.790 CC test/unit/lib/rdma/common.c/common_ut.o 00:48:25.048 LINK idxd_ut 00:48:25.048 LINK nvme_opal_ut 00:48:25.305 LINK common_ut 00:48:25.563 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:48:25.563 LINK vhost_ut 00:48:25.821 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:48:25.821 CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o 00:48:26.080 LINK cpuset_ut 00:48:26.338 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:48:26.597 LINK crc16_ut 00:48:26.855 CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o 00:48:26.855 LINK nvme_cuse_ut 00:48:26.855 CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o 00:48:26.855 LINK nvme_rdma_ut 00:48:26.855 CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o 00:48:27.113 CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o 00:48:27.113 LINK ftl_l2p_ut 00:48:27.371 LINK ftl_bitmap_ut 00:48:27.371 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:48:27.629 LINK ftl_io_ut 00:48:27.629 CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o 00:48:27.887 LINK crc32_ieee_ut 00:48:27.887 LINK ftl_band_ut 00:48:27.887 LINK ftl_mempool_ut 00:48:28.453 CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o 00:48:29.021 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:48:29.021 CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o 00:48:29.278 LINK crc32c_ut 00:48:29.278 LINK ftl_mngt_ut 00:48:29.278 CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o 00:48:29.278 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:48:29.535 CC test/unit/lib/util/dif.c/dif_ut.o 00:48:29.535 LINK crc64_ut 00:48:30.101 LINK ftl_sb_ut 00:48:30.101 CC test/unit/lib/util/iov.c/iov_ut.o 00:48:30.101 LINK ftl_layout_upgrade_ut 00:48:30.101 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:48:30.101 CC test/unit/lib/util/math.c/math_ut.o 00:48:30.101 CC test/unit/lib/util/string.c/string_ut.o 00:48:30.359 LINK iov_ut 00:48:30.359 LINK math_ut 00:48:30.359 LINK string_ut 00:48:30.359 LINK dif_ut 00:48:30.617 LINK pipe_ut 00:48:30.617 CC test/unit/lib/util/xor.c/xor_ut.o 00:48:30.876 LINK xor_ut 00:49:52.348 
11:03:34 -- spdk/autopackage.sh@44 -- $ make -j10 clean 00:49:52.348 make[1]: Nothing to be done for 'clean'. 00:49:52.348 11:03:38 -- spdk/autopackage.sh@46 -- $ timing_exit build_release 00:49:52.348 11:03:38 -- common/autotest_common.sh@718 -- $ xtrace_disable 00:49:52.348 11:03:38 -- common/autotest_common.sh@10 -- $ set +x 00:49:52.348 11:03:38 -- spdk/autopackage.sh@48 -- $ timing_finish 00:49:52.348 11:03:38 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:49:52.348 11:03:38 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:49:52.348 11:03:38 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:49:52.348 + [[ -n 2375 ]] 00:49:52.348 + sudo kill 2375 00:49:52.348 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:49:52.358 [Pipeline] } 00:49:52.374 [Pipeline] // timeout 00:49:52.379 [Pipeline] } 00:49:52.396 [Pipeline] // stage 00:49:52.402 [Pipeline] } 00:49:52.422 [Pipeline] // catchError 00:49:52.432 [Pipeline] stage 00:49:52.434 [Pipeline] { (Stop VM) 00:49:52.450 [Pipeline] sh 00:49:52.729 + vagrant halt 00:49:56.013 ==> default: Halting domain... 00:50:05.990 [Pipeline] sh 00:50:06.269 + vagrant destroy -f 00:50:09.553 ==> default: Removing domain... 00:50:10.978 [Pipeline] sh 00:50:11.257 + mv output /var/jenkins/workspace/ubuntu20-vg-autotest_2/output 00:50:11.267 [Pipeline] } 00:50:11.287 [Pipeline] // stage 00:50:11.293 [Pipeline] } 00:50:11.313 [Pipeline] // dir 00:50:11.319 [Pipeline] } 00:50:11.340 [Pipeline] // wrap 00:50:11.347 [Pipeline] } 00:50:11.366 [Pipeline] // catchError 00:50:11.377 [Pipeline] stage 00:50:11.379 [Pipeline] { (Epilogue) 00:50:11.396 [Pipeline] sh 00:50:11.677 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:50:29.769 [Pipeline] catchError 00:50:29.770 [Pipeline] { 00:50:29.784 [Pipeline] sh 00:50:30.063 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:50:30.063 Artifacts sizes are good 00:50:30.070 [Pipeline] } 00:50:30.084 [Pipeline] // catchError 00:50:30.094 [Pipeline] archiveArtifacts 00:50:30.099 Archiving artifacts 00:50:30.445 [Pipeline] cleanWs 00:50:30.455 [WS-CLEANUP] Deleting project workspace... 00:50:30.455 [WS-CLEANUP] Deferred wipeout is used... 00:50:30.461 [WS-CLEANUP] done 00:50:30.462 [Pipeline] } 00:50:30.480 [Pipeline] // stage 00:50:30.486 [Pipeline] } 00:50:30.501 [Pipeline] // node 00:50:30.507 [Pipeline] End of Pipeline 00:50:30.539 Finished: SUCCESS